\input basic
\jpar 1000
\varunit 1truein
\parskip .15in plus 5pt \baselineskip 15pt \lineskip 1pt
\def\rm{\:a} \def\sl{\:n} \def\bf{\:q} \def\it{\:?}
\font m=cmsc10 		%  Small caps
\def\sc{\:m}		%  Small Caps  (10pt)
\font N=cmdunh		%  Titles
\font M=cmr18 		%  Titles
\font <=cmb10		%  Bolder
\font t=cmtt9[tex,sys]
%\output{\advcount0\baselineskip 18pt\page}
\output{\baselineskip 18pt\page\ctrline{\curfont a\count0}\advcount0}
\def\sect#1.{\vskip 20pt plus20pt minus 7pt\penalty -100
   \secta#1.}
\def\secta#1.{\noindent{\bf \count1.\quad #1}\par\penalty 1000\advcount1\par
\vskip 5pt plus4pt minus 3pt\penalty 1000}

\setcount1 1
\setcount0 1

\def\ssect#1.{\yyskip{\bf\penalty -50\hbox{\vbox{\hbox{#1}}}
\penalty 800}} 
\def \ni{\noindent}

\def\textindent#1{\noindent\hbox to 19pt{\hskip 0pt plus 1000pt minus 1000pt#1\ 
\!}}
\def\hang{\hangindent 19pt}

\def\numitem#1{\yskip\hang\textindent{#1}}

\def\ref:#1.#2.{[#1;#2]}

\def	\bcode	{\par\vfil\penalty400\vfilneg\parskip 0pt plus 1 pt\vskip .25 in\: t\beginpassasis}
\def	\ecode  {\endpassasis\: a\par\vskip .05 in\vfil\penalty500\vfilneg\parskip .1 in plus 5pt}

\input pai[can,rpg]

\def\eat#1\endeat{}	% macro for extended comments
\def\makeref:#1.#2.#3{\par{\noindent \parshape 2 0pt 353pt
20pt 333pt \hbox{[#1;#2]}\linebreak 
{#3}}\par\parshape 0\vfil\penalty 0\vfilneg}
\def\article#1{{\sl #1}}
\def\book#1{{\:< ``#1''}}
\def\yyskip{\penalty-100\vskip8pt plus6pt minus4pt}
\def\yskip{\penalty-50\vskip 3pt plus 3pt minus 2pt}
\def\bslskp{\baselineskip 15pt}
\ctrline{\:M Basic Research}
\vskip .2truein
\ctrline{\:M in}
\vskip .2truein
\ctrline{\:M Artificial Intelligence}
\vskip .2truein
\ctrline{\:M and}
\vskip .2truein
\ctrline{\:M Formal Reasoning}

\vfill

\noindent {\bf Personnel}: John\ McCarthy, Lewis\ Creary, Richard\ Gabriel, 
Christopher\ Goad,
Jussi\ Ketonen, Carolyn\ Talcott, Yoram\ Moses, Joseph\ Weening

\vfill\eject
\sect Introduction.

This proposal is a request for renewal and continuation of our 1982
proposal to the Defense Advanced Research Projects Agency.  Each section
presents a research overview for a particular project, reports
accomplishments, and proposes new objectives. It is assumed that the
reader has some familiarity with the 1982 proposal.

Applied research requires basic research to replenish the stock of ideas
on which its progress depends.  The long range goals of work in basic AI
and formal reasoning are to make computers carry out the reasoning
required to solve problems.  Our work over the past few years has made it
considerably clearer how our formal approach can be applied to problems in
AI and in the mathematical theory of computation (MTC).  This brings
application nearer, especially in the latter area.

\vfill\eject
\sect Formal Reasoning.

\ssect Accomplishments.

John McCarthy continued his work on the formalization
of the commonsense facts about the world needed for intelligent
behavior and on the formalization of computer programs.

McCarthy's personal research concerns the formalization of commonsense
knowledge and commonsense reasoning in mathematical/logical languages.
Such formalization is required to represent the information in the memory
of a computer and to make programs that can reason with the information in
order to solve problems.  It is now widely agreed (for example, by
Newell, Nilsson, and Minsky) that progress in artificial intelligence is
limited by our ability to express commonsense knowledge and reasoning.
There isn't widespread agreement on how to do this, and McCarthy's work
explores one end of the spectrum of ideas---using mathematical logic.

	Since 1981, McCarthy has made substantial progress on three
problems:

\ni 1. Non-monotonic reasoning.

	In 1980 McCarthy published ``Circumscription: a method
of non-monotonic reasoning.''  Since that time circumscription
has been further developed.  A forthcoming paper will describe
a new formalization and many new applications.
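
For reference, the basic construction of \ref:McCarthy.1980. can be stated
as a schema: circumscribing a predicate P in a sentence A(P) amounts to
adjoining, for each predicate expression Q, the axiom
$$A(Q) ∧ ∀x.(Q(x) ⊃ P(x)) ⊃ ∀x.(P(x) ⊃ Q(x)),$$
which says that the objects satisfying P are only those that A forces to
satisfy it.  (This restatement is a paraphrase for convenience; the precise
formulations and their variants are given in the papers cited.)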

There seem to be many kinds of non-monotonic reasoning formalizable by
circumscription, and it isn't clear how to unify them.  It also isn't
clear whether circumscription is a universal form of non-monotonic
reasoning or whether other forms will be required.  Circumscription is
known to subsume many examples of non-monotonic reasoning given earlier by
McDermott and Doyle, and by Reiter; and there are no outstanding cases
where it is not known how to do it.

	Here are some example uses of circumscription.

\numitem{a.}As a communication convention.  Suppose  A  tells  B  about
a situation involving a bird.  If the bird may not be able to fly, and this
is relevant to solving the problem, then  A  should mention
the relevant information.  If, on the other hand, the bird can fly, there is
no requirement to mention the fact.

\numitem{b.}As a database or information storage convention.  It can be a
convention of a particular database that certain predicates have their
minimal extension.  This generalizes the closed-world assumption of
Reiter.  When a database makes the closed-world assumption for all
predicates it is reasonable to imbed this fact in the programs that use
the database.  However, when there is a priority structure among the
predicates, we need to express the priorities as sentences of the
database, perhaps included in a preamble to it.

\numitem{}Neither (a) nor (b) requires that most birds can fly.  Should it happen in
situations (a) and (b) above that most birds cannot fly, the convention may
lead to inefficiency but not incorrectness.

\numitem{c.} As a rule of conjecture.  This use was emphasized in
\ref:McCarthy.1980..  The circumscriptions may be regarded as expressions
of some probabilistic notions such as `most birds can fly' or they may be
expressions of simple cases.  Thus it is simple to conjecture that there
are no relevant present material objects other than those whose presence
can be inferred.  It is also a simple conjecture that a tool asserted to
be present is usable for its normal function.  Such conjectures sometimes
conflict, but there is nothing wrong with having incompatible conjectures
on hand.  Besides the possibility of deciding that one is correct and the
other wrong, it is possible to use one for generating possible exceptions
to the other.

\numitem{d.}As a representation of a policy.  The example is Doyle's ``The meeting
will be on Wednesday unless another decision is explicitly made.''

\numitem{e.}As a very streamlined expression of probabilistic information when
numerical probabilities, especially conditional probabilities, are
unobtainable.  Since circumscription doesn't provide numerical
probabilities, its probabilistic interpretation involves probabilities that
are either infinitesimal, within an infinitesimal of one, or
intermediate---without any discrimination among the intermediate values.
Circumscriptions give conditional probabilities.  Thus we may treat
the probability that a bird cannot fly as an infinitesimal.  However, if
the rare event occurs that the bird is a penguin, then the conditional
probability that it can fly is infinitesimal, but we may hear of some rare
condition that would allow it to fly after all.

\numitem{}Why don't we use finite probabilities combined by the usual laws?  That
would be fine if we had the numbers, but circumscription is usable when we
can't get the numbers or find their use inconvenient.  Note that the
general probability that a bird can fly may be irrelevant, because we are
interested in particular situations which weigh in favor of or against a
particular bird flying.

\numitem{}Notice that circumscription does not provide for weighing evidence; it is
appropriate when the information permits snap decisions.  However, many
cases nominally treated in terms of weighing information are in fact cases
in which the weights are such that circumscription and other defaults work
better.  

\ni 2. Formalization of knowledge about knowledge.
The importance of representing knowledge about knowledge is discussed
elsewhere in this proposal and our previous proposals.  In particular,
the proposed `intelligence analyst' requires knowledge about
knowledge.

McCarthy developed a first-order language that admits Kripke-style
possible worlds as objects and uses the Kripke accessibility relation to
formalize the knowledge that a person has of the value of an expression.
McCarthy showed how to use this formalism to express the facts of a
difficult mathematical puzzle involving knowledge---the following puzzle
of `Mr. S and Mr. P.'

	Two numbers are chosen between 2 and 99 inclusive.  Mr. P
is told the product and Mr. S is told the sum.  The following
dialog occurs:

\ni Mr. P: I don't know the numbers.

\ni Mr. S: I knew you didn't know them.  I don't know them either.

\ni Mr. P: Now I know the numbers.

\ni Mr. S: Now I know the numbers.

\ni Problem: What are the numbers?

\ni Answer: The numbers are 4 and 13.
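
The puzzle itself can be checked by brute force.  The following small Lisp
program (an illustrative sketch, written here in Common Lisp and independent
of the FOL formalization discussed below) implements the usual reading of
the dialogue:

\bcode
(defun candidate-pairs ()
  ;; all pairs (a . b) with 2 <= a <= b <= 99
  (loop for a from 2 to 99
        nconc (loop for b from a to 99 collect (cons a b))))

(defun psum  (p) (+ (car p) (cdr p)))
(defun pprod (p) (* (car p) (cdr p)))

(defun same (key p pairs)
  ;; the members of PAIRS agreeing with P on the function KEY
  (let ((v (funcall key p)))
    (remove-if-not (lambda (q) (= v (funcall key q))) pairs)))

(defun solve ()
  (let* ((all (candidate-pairs))
         ;; Mr. P doesn't know: his product is ambiguous.
         (p1 (remove-if (lambda (p) (= 1 (length (same 'pprod p all)))) all))
         ;; Mr. S knew that, and doesn't know either: every pair with his
         ;; sum is in P1, and there is more than one such pair.
         (s1 (remove-if-not
               (lambda (p)
                 (let ((mates (same 'psum p all)))
                   (and (> (length mates) 1)
                        (every (lambda (q) (member q p1 :test 'equal))
                               mates))))
               all))
         ;; Mr. P now knows: his product picks out a unique pair within S1.
         (p2 (remove-if-not (lambda (p) (= 1 (length (same 'pprod p s1)))) s1))
         ;; Mr. S now knows: his sum picks out a unique pair within P2.
         (s2 (remove-if-not (lambda (p) (= 1 (length (same 'psum p p2)))) p2)))
    s2))

;; (solve) should return the single pair (4 . 13).
\ecode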

	The key difficulty in formalizing `Mr. S and Mr. P' comes from
the need to represent ignorance.  How do we represent the facts that
all Mr. S initially knows about the numbers is given by his knowledge
of the sum and that all he knows about Mr. P's knowledge is that Mr.
P knows the product?  Representation of this ignorance is accomplished
by asserting the existence of a sufficiently comprehensive collection
of possible worlds accessible to Mr. S.  Using the possible worlds
in the language itself rather than regarding them as metalinguistic
devices has been uncongenial to logicians.  However, none of the experts on
modal logic to whom the problem has been submitted were able to
formulate it directly in modal logic.

This example problem attracted our attention to an issue that is
fundamental to all programs that reason about knowledge---the
representation of ignorance.  Thus the advice-taking ANALYST (discussed
below) program may conjecture that all the Russians know about a
certain radar is what can be deduced from its appearance, where it is
used, and the articles in Aviation Week.  We can express this by saying
that any plausible conjecture compatible with this information is a
possibility for them.  If they behave so as to rule out such a
possibility, then we conjecture that they have an additional source of
information.  The explicitly Kripkean formalization developed in
connection with `Mr. S and Mr. P' should work here as well.

	McCarthy's formalization of `Mr. S and Mr. P' was modified
by Ma Xiwen, a Chinese visitor, and used for an FOL proof that
transformed the problem into a purely arithmetic problem.  Unfortunately,
McCarthy found an error in Ma's axioms, and it isn't clear that
when the error is fixed, the proof still works.  Also the axioms
describing the effect of learning---what happens when Mr. P
learns that Mr. S knew he didn't know the numbers---seem to have
some defects.  Formalizing the effect of learning that other people
know something (or do not) is required for making the ANALYST program
work and in any program with similar capabilities.

\ni 3. Introspection in logic programming.  In the early 1970's Alain
Colmerauer of Marseilles University in France and Robert Kowalski of
Imperial College in London proposed the use of first order logic as a
programming language, and the Marseilles group subsequently embodied these
ideas in a programming language called Prolog.  Prolog has become popular
in Europe for artificial intelligence use, and the Japanese `Fifth
Generation' project proposes to use logic programming.  Of people who have
used both Lisp and Prolog, some prefer one and some the other.  Alan
Robinson and others have tried to develop languages that allow both styles
of programming.  McCarthy has tried Prolog and still prefers Lisp.

However, certain issues of programming style come up more clearly in
Prolog than in Lisp.  One of these can be called `introspection,' wherein
a program uses itself and its memory state as data.  This is similar to
`reflection' as studied by Brian Smith in his well-known MIT PhD
thesis.

Kowalski proposed a slogan
``$\hbox{Algorithm}=\hbox{logic}+\hbox{control}$,'' and proposed that
Prolog permitted a neat separation.  McCarthy has investigated this claim
using a map-coloring logic program devised by Luis Pereira and Antonio
Porto of the University of Lisbon in Portugal.  He noted that two
efficient algorithms for map coloring could be regarded as having the same
logic as the original Pereira-Porto logic program but more sophisticated
control.
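
To make the example concrete, a map-coloring program of this kind can be
written in a few lines.  The sketch below (our own illustration in Common
Lisp, not the Pereira-Porto Prolog program) expresses the `logic'---adjacent
regions must receive different colors---and uses the default control of
chronological backtracking; the two algorithms discussed next change only
the control.

\bcode
(defparameter *colors* '(red yellow blue green))

(defun color-map (regions adjacentp assignment)
  ;; Assign a member of *COLORS* to each of REGIONS so that no two regions
  ;; related by the predicate ADJACENTP get the same color.  ASSIGNMENT is
  ;; the alist of (region . color) choices made so far; call it with '().
  ;; Returns a completed alist, or NIL if the regions cannot be colored.
  (if (null regions)
      assignment
      (let ((r (first regions)))
        (dolist (c *colors* nil)
          (when (notany (lambda (done)
                          (and (funcall adjacentp r (car done))
                               (eq c (cdr done))))
                        assignment)
            (let ((rest (color-map (rest regions) adjacentp
                                   (acons r c assignment))))
              (when rest (return rest))))))))

;; Example: regions A, B, C, D with A-B, B-C, C-D, D-A, and A-C adjacent.
;; (color-map '(a b c d)
;;            (lambda (x y)
;;              (member (list x y)
;;                      '((a b) (b a) (b c) (c b) (c d) (d c)
;;                        (d a) (a d) (a c) (c a))
;;                      :test 'equal))
;;            '())
;;   =>  ((D . YELLOW) (C . BLUE) (B . YELLOW) (A . RED))
\ecode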

One of these algorithms involved a `postponement heuristic' that
re-ordered goals, putting last those goals that could still be realized
regardless of how the previous goals were achieved.  In the map-coloring
case, after some goals have been postponed, still others become
postponable; this process can be repeated.

	The second algorithm uses so-called Kempe transformations
to modify assignments already made to variables when the algorithm
gets stuck.  It is far superior to backtracking for map-coloring,
but it can still be described as control applied to the original
Pereira-Porto logic.  However, this kind of control requires that
the program, when stuck, go into an `introspective mode' wherein
it looks at data structures created by the interpreter that are
not ordinarily accessible to the program itself.  This is the
work whose ideas overlap those of Brian Smith.  Smith applied them
to Lisp, but has recently been exploring the idea that they are
even more applicable to a logic programming language such as
Prolog.

	The two kinds of introspection can be called syntactic
and semantic.  The postponement heuristic is syntactic in that
it involves looking at the goals and re-ordering them before
attempting to realize any of them.  The Kempe heuristic is
semantic in that it doesn't re-order the goals, but modifies
the order in which alternatives are tried in a task-dependent
manner and requires that the program look at its own stack
of tasks.

	McCarthy's results show that the Kowalski
doctrine can be maintained in the map-coloring case.  Both algorithms
can be regarded as sophisticated control applied to simple logic.
The control mechanisms required in the two cases go far beyond
what the logic programmers---for example, Keith Clark, Pereira and Porto,
and Herve Gallaire---had been willing to contemplate.  While they
have some generality, it seems likely that additional problems
will require additional mechanism.

	So far it hasn't proved especially convenient to program
by beginning with a simple logic and adding a sophisticated control.
However, such techniques should make it easier to prove the
correctness of programs.  Further exploration of the Kowalski
doctrine applied to logic programming may turn up interesting
and useful methods of control and other heuristics.

\ref:McCarthy.1982. ``Coloring Maps and the Kowalski Doctrine''
presents some of these ideas.

\ssect Objectives.

	McCarthy will continue his work on formalizing commonsense
reasoning during the period of the proposal.  The following will be
emphasized.

\ni 1. An inventory of human capabilities for commonsense representation,
commonsense knowledge, and commonsense reasoning.

There is widespread agreement that lack of general commonsense knowledge
is the key present limitation on the applicability of AI programs.
However, each of the authors making this point (for example: Newell, Nilsson,
Minsky, Genesereth and McCarthy) has had to content himself with giving
examples.  The time seems to be ripe to attempt a comprehensive list.
McCarthy will prepare a paper giving such a list during 1983 and plans to
use this paper as a basis for discussion with other AI researchers.  It is
extremely unlikely that the initial list will be accepted as
comprehensive, and we plan further editions over the three-year period of
the contract.  Insofar as our list is comprehensive, individual AI
researchers can concentrate their attention on specific aspects of commonsense
knowledge or reasoning.

\ni 2. Formalization of additional commonsense concepts.  A
key problem that has been around for many years is that of formalizing
the consequences of actions where concurrent events are taking
place.  There are some new ideas, and Yoram Moses is exploring
writing a thesis in this area.  So far as we know, no existing AI programs
can handle this at all.

	The simple blocks world formalisms have long suffered from
the `frame problem' of specifying what doesn't change when an action
takes place.  McCarthy has developed a formalization using circumscription
that avoids requiring that the description of an action describe
what doesn't change.  He will try to elaborate this into a general
solution of the frame problem.

\ni 3. Non-monotonic reasoning.  Circumscription has mathematical
properties related to the notion of satisfaction in mathematical logic.
McCarthy has shown (unpublished) that in the propositional case determining
the models resulting from circumscribing a formula leads to a tree
of satisfaction problems.  The useful case of predicate circumscription
should lead to a generalization of the satisfaction problem.  Progress
probably depends on attracting the attention of mathematical logicians
to the problem.  Yoram Moses has started on some aspects of it.

	McCarthy has identified two important properties of the human intellect
that have not been put in computer programs.  We call them `ambiguity
tolerance' and `elaboration tolerance,' and they can both be treated by
circumscription.  (Ambiguity tolerance was mentioned vaguely by Dreyfus
in his book ``What Computers Can't Do'' as something that computers
can't do).

	`Ambiguity tolerance' refers to the fact that humans successfully
use concepts that are subsequently shown to be ambiguous.  Even after
the ambiguity is understood a person uses the ambiguous form of the
concept in situations in which the ambiguity doesn't show up.
If all concepts used by AI programs were required to be unambiguous, it
would mean that all possible philosophy would have to be done before
any AI.  Therefore, AI programs must use ambiguous concepts.  It seems
that circumscription or other non-monotonic reasoning can be used to
allow the inference that a concept is unambiguous in a particular
instance.

`Elaboration tolerance' is a similar phenomenon.  We can use a
representation in which the location of a university, or even a building, is
considered independent of the situation.  Nevertheless, if it is necessary
to consider elaborating the representation in order to consider moving the
institution, this can be done.  Again circumscription can be used to make
this capability available to programs.

\ni 4. Identifying intellectual mechanisms.

Besides those based on non-monotonic reasoning, human intelligence
embodies many mechanisms not yet incorporated in formal systems.  McCarthy
has uncovered some of these and plans to study them.  Here are examples:

\numitem{a.}Common Business Communication Language

\numitem{}McCarthy (1982) explores the requirements for business communication
between programs belonging to different organizations.  For example, a
purchasing program belonging to one company might inquire about the price
and delivery of some product and place an order.  Similarly a program
belonging to one military organization might inquire about the readiness
of airplanes or the availability of certain spare parts.  With a suitable
authorization it might issue a requisition.  When this problem was first
considered it was supposed that it was just a standardization
problem, but further examination shows that it involves formalizing
the semantics (not syntax) of a substantial and interesting
fragment of natural language.  The present languages used for
communication between AI programs and their users are inadequate
to express the concepts required.  The internal languages so far used
in AI are also inadequate.

\numitem{b.}Modifying programs without reading them.

\numitem{}Modifying the behavior of a computer program usually requires a detailed
study of the relevant part of the program and the data structures it uses.
Much less is required in communicating with a human.  For example, the
head of an airline might tell his subordinates that Zoroastrian
passengers should receive a complimentary drink if they fly on Zoroaster's
birthday.  He does not have to mention or even know how records are
kept---whether in the head of a ticket agent for a very small airline, in
ledgers for an old-fashioned airline, or in a computer.  The reason he can
do this is that he is allowed to refer to external events---not just
inputs---in expressing his wishes.  McCarthy has noticed that with
suitable programming languages computers can also be instructed without
understanding their internal workings.  The Elephant language, developed
by McCarthy, obviates the need to mention many data structures, because
the user can refer to past events without referring to any data structure
that remembers past events.  We plan to explore these possibilities
further.


\numitem{c.}Reification.

\numitem{}Human thought and language often make nouns of verbs
and adjectives.  Some examples are: think $\→$ thought, red $\→$ redness,
to attack $\→$ attack.  Philosophers call this `reification,' i.e.
`making a thing out of,' from the Latin word `res' meaning `thing.'
They have mainly been concerned with cases in which reification
leads to error.

\numitem{}However, reification seems to be an important intellectual
process and AI programs will have to do it.  For instance, the question:
`How many things are still wrong with the boat now that you have fixed
some of them?'  requires the reification of `things that are still wrong'
and `things that you have fixed.'  We plan to study reification and to
formalize some AI examples in which it is useful.

\vfill\eject
\sect Advice-taking ANALYST.

Lewis Creary and Richard Gabriel are collaborating (Creary full-time, Gabriel
quarter-time) on implementing a prototype version of the advice-taking ANALYST
program.  The motivation, design criteria, basic approach, and leading ideas
for this program were described fully in our previous proposal (1981) and will
not be repeated in detail here.  To summarize briefly, the basic goal of the
project is to construct an advanced, advice-taking problem solver for
analyzing interesting kinds of ``intelligence'' information---in short, a
first incarnation of the ``intelligence'' ANALYST mentioned in our 1979
proposal as a potential long-range application.  This program will attempt to
infer concealed facts from known facts and from reasonable conjectures that it
has made.  It will deal with information about beliefs, desires, purposes,
preferences, aversions, and fears.  It will ascribe plans and goals to people
on the basis of their observed actions and commonsense background knowledge,
and it will use these ascribed plans and goals to predict future actions.  Our
basic approach to construction of an ANALYST stems from John McCarthy's design
sketch for an ``Advice Taker'' \ref:McCarthy.1959. and presently is based on
the following main ideas:

\ni 1. A general framework for the structuring, description, acquisition, and
use of specialized knowledge as the foundation for a domain-independent
commonsense problem solver.  Some of the specialized knowledge will concern
the application domain, and some will be meta-knowledge concerning the
structure, capabilities, etc. of the ANALYST program itself.

\ni 2. Integration of the frames/units and predicate calculus approaches to
knowledge representation.

\ni 3. Extension of the standard predicate calculus formalism to permit
representation of adverbial constructions, sortal quantifiers, information
concerning beliefs, desires and intentions, causal connections between states
of affairs, subjunctive conditional statements, and $λ$-abstraction.

\ni 4. Fregean representation of propositional attitudes, coupled with the
simulative use of the program's own human-like thought mechanisms as a means
of recognizing and analyzing the beliefs, desires, plans, and intentions of
people and other intelligent organisms.

\ni 5. A {\sl competing considerations} model of commonsense factual
reasoning, in which reasoning involves the weighing and composition of all
reasonably discoverable considerations bearing on a set of target
propositions as the basis for drawing conclusions.

\ni 6. Comparative partial matching as the basis for selection of the most
appropriate methods, procedures, and abstract plan schemata during problem
solving.

\ni 7. Interactive advice seeking and taking as a means of knowledge acquisition.

\ssect Accomplishments.

\ni 1. A new logical language was designed for use by the ANALYST to
represent commonsense information.

An essential requirement for effective commonsense reasoning and
problem-solving is the ability to represent commonsense knowledge about the
everyday world in a way that accurately reflects its logical consequences.
Providing such an ability has not proved easy for AI researchers, since the
standard predicate calculus and its higher-order relatives, while sufficient
for most mathematical purposes, appear to be inadequate for representing many
important commonsense concepts in a natural way.  On the other hand, while
non-logical ad-hoc representation schemes can be effective in special cases,
they tend to lack the flexibility and generality needed for commonsense
reasoning, and needlessly forgo the benefits of a systematic truth-semantics
that are available to logically-oriented representation schemes.

To meet this problem a new logical language named NFLT was designed as the
``thought-vehicle'' for the advice-taking ANALYST.  `NFLT' is an acronym for
`Neo-Fregean Language of Thought.'  NFLT is the result of augmenting and
partially reinterpreting the standard predicate calculus formalism to permit
representation of adverbial constructions; sortal quantifiers; information
concerning beliefs, desires, and intentions; causal connections between states
of affairs; subjunctive conditional statements; and $λ$-abstraction.  Many
commonsense descriptions that are difficult or impossible to express
adequately in the standard predicate calculus are expressible directly and
naturally in NFLT.  While NFLT is much closer semantically to natural language
than is the standard predicate calculus, and is to some extent inspired by
psychologistic considerations, it is nevertheless a formal logic admitting of
a model-theoretic semantics.  The intended semantics incorporates a Fregean
distinction between sense and denotation, with associated principles of
compositionality.  A working paper was written which describes this language
\ref:Creary.1982..  Here is a relatively short example of NFLT's expressive power:

\ni English:

\numitem{}Being careful not to move anything else on his desk, John picks
up $\hbox{hammer}↓1$ with his right hand in order to drive a nail with it.

\ni NFLT Display Syntax:

\bcode
$\{∃$ $\up$X.CONCEPT($\up$X, OF:!DESK JOHN)
   $\up$Y.CONCEPT($\up$Y, OF:HAMMER1)
    Z.NAIL
  $\up$X1.CONCEPT($\up$X1, OF:Z)$\}$
        .PICKUP(JOHN, HAMMER1, FROM:!DESK JOHN,
                WITH:!RIGHT-HAND JOHN, INORDERTHAT:$\up$[DRIVE(I, $\down$[$\up$X1],
                WITH:$\down$[$\up$Y])], WITHCARETHAT:$\up$[$\{∀$Y1.PHYSOB Y1 $∧$
                ON(Y1, $\down$[$\up$X]) $∧$ $¬$(Y1 $=$ $\down$[$\up$Y])$\}$.$¬$MOVE(I, Y1)])
\ecode

\ni 2. MacLisp programs were written for the translation, display, and logical
analysis of NFLT formulas, and for the construction and management of a
``uniquifying'' discrimination net that stores all NFLT formulas.

The internal syntax of NFLT is variable-free and has many of the features
of semantic-network formalisms.  For input purposes, a somewhat
different Lisp-form syntax is presently used; this is translated
immediately upon entry into the internal formalism.  For display purposes,
still another syntax is used, one that is more concise and much more
easily readable by people than the internal syntax, and closer
to standard logical notation than the input syntax.  Thus NFLT is
presently implemented with three distinct syntaxes---one for input, one
for memory and reasoning, and one for display.

NFLT-expressions that are in some way associated with potentially useful
information are presently stored in a discrimination net that maintains a
unique, canonical version of each compound concept and proposition in the
ANALYST's explicit memory.  An NFLT-expression is located in the
discrimination net by computing for it a unique list of indexing keys.
The first element of this key-list is a concept-construction operator
that specifies one way in which the embodied concept may be constructed
from component concepts.  The remaining members of the key-list are the
component concepts, either atomic or compound.  Each logically-compound
component concept is itself indexed in the discrimination net by its own
key-list, consisting of a concept-construction operator and component
concepts.  Thus the discrimination net implicitly records for each stored
compound concept a complete analysis into {\sl atomic} component concepts.
Quite apart from its role in the indexing scheme, this conceptual analysis is
of theoretical significance as a demonstration of the compositional semantics
of NFLT.
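
The uniquifying behavior, though not the discrimination net itself, can be
imitated in a few lines of Lisp.  In the sketch below (an illustration in
Common Lisp; the actual MacLisp implementation uses a discrimination net,
not a hash table), a compound concept is interned under the key-list of its
construction operator and component concepts, so that equal key-lists
always yield the identical node:

\bcode
(defstruct node op components)

(defvar *concept-table* (make-hash-table :test 'equal))

(defun intern-concept (op components)
  ;; OP is a concept-construction operator; COMPONENTS is a list of atomic
  ;; concepts (symbols) or previously interned compound concepts (nodes).
  ;; Because the components are themselves canonical, EQUAL on the key-list
  ;; suffices to recognize a previously stored concept.
  (let ((key (cons op components)))
    (or (gethash key *concept-table*)
        (setf (gethash key *concept-table*)
              (make-node :op op :components components)))))

;; (eq (intern-concept 'prop (list 'can-fly (intern-concept 'indiv '(slick))))
;;     (intern-concept 'prop (list 'can-fly (intern-concept 'indiv '(slick)))))
;;   =>  T
\ecode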

\ni 3. A matcher for NFLT was designed and partially implemented.  It is a
unifier and was implemented by writing a matcher-generator and applying
that generator to a description of the NFLT data-structure.  This
matcher-generator is a major result since it allows one to abstractly
describe pattern and datum data structures and to automatically generate a
unifying pattern matcher on those data structures.  A significant feature
of the class of matchers that the matcher-generator produces is that they
unify `star-variables.'  Star-variables match $0$ or more elements in a
single-level stream of elements.  The Gabriel reasoner (described below)
uses a tree-structure unifier which was the model for the
matcher-generator.  Recent work on the tree-structure unifier needs to be
incorporated into the matcher-generator and hence the NFLT matcher.
Recent insights concerning the use of subformulas of quantified NFLT
expressions as patterns also need to be taken into account.
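
The flavor of star-variables can be conveyed by a much simpler program than
the matcher-generator itself.  The following hand-written sketch (in Common
Lisp) matches a pattern containing ordinary variables (symbols beginning
with `?') and star-variables (symbols beginning with `*') against ground
list structure; unlike the generated NFLT matcher, it is a one-way matcher
rather than a unifier:

\bcode
(defun prefix-p (c x)
  ;; true if X is a symbol whose print name begins with the string C
  (and (symbolp x)
       (plusp (length (symbol-name x)))
       (string= c (symbol-name x) :end2 1)))

(defun var-p  (x) (prefix-p "?" x))
(defun star-p (x) (prefix-p "*" x))

(defun bind (var value bindings)
  ;; extend BINDINGS with VAR = VALUE, or return :FAIL on a conflict
  (let ((old (assoc var bindings)))
    (cond ((null old) (acons var value bindings))
          ((equal (cdr old) value) bindings)
          (t :fail))))

(defun match (pattern datum bindings)
  ;; match PATTERN against the ground s-expression DATUM, starting from the
  ;; binding alist BINDINGS; returns an extended alist or :FAIL
  (cond ((eq bindings :fail) :fail)
        ((var-p pattern) (bind pattern datum bindings))
        ((and (consp pattern) (star-p (first pattern)) (listp datum))
         ;; a star-variable absorbs 0 or more elements at this level
         (loop for i from 0 to (length datum)
               for b = (bind (first pattern) (subseq datum 0 i) bindings)
               for result = (if (eq b :fail)
                                :fail
                                (match (rest pattern) (subseq datum i) b))
               unless (eq result :fail) return result
               finally (return :fail)))
        ((and (consp pattern) (consp datum))
         (match (rest pattern) (rest datum)
                (match (first pattern) (first datum) bindings)))
        ((equal pattern datum) bindings)
        (t :fail)))

;; (match '(pickup ?agent *objs (with ?hand))
;;        '(pickup john hammer1 nail1 (with right-hand))
;;        '())
;;   =>  ((?HAND . RIGHT-HAND) (*OBJS HAMMER1 NAIL1) (?AGENT . JOHN))
\ecode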

\ni 4. Epistemological work was done on a scheme for causal reasoning that
will offer new solutions to the frame and qualification problems.  This
work is presently being completed and implemented in the ATREAS commonsense
reasoner described in item 7 below.

\ni 5. Detailed epistemological studies were made of some of the
commonsense factual reasoning involved in plan production and of the
interaction of this reasoning with the planning process.  This reasoning
was found to be quite complex.  As an example of some of the relevant
knowledge that we have formalized in these studies, we present one of the
commonsense causal laws that our reasoner is currently being developed to
utilize (Law of Causal Influence for Driving):

Here is a rough statement in English of the law:  ``If a person
tries to drive a car from location $L↓1$ to location $L↓2$ via drive-path
V, leaving at $T↓1$ and arriving by $T↓2$, then, provided that the car is
located at $L↓1$ at time $T↓1$, and that there is sufficient time for the
intended driving, and provided that certain further conditions are met
concerning the condition of the car, drive-path, and person, this will
tend to cause the person to succeed in his attempted driving.''

The following is a computer-readable formalized version of the law (NFLT
Lisp-form input syntax):

\bcode
((all p person
      x automobile
     $\up$x (concept $\up$x (of x))
     l1 location
    $\up$l1 (concept $\up$l1 (of l1))
     l2 location
    $\up$l2 (concept $\up$l2 (of l2))
      v (drive-path v l1 l2)
     $\up$v (concept $\up$v (of v))
     t1 time
    $\up$t1 (concept $\up$t1 (of t1))
     t2 time
    $\up$t2 (concept $\up$t2 (of t2)) )
 (tends-to-cause
    (and (located-at x l1 (att t1))
         (greater (!time-difference t1 t2) (!drive-time l1 l2 v))
         (try p ($\up$(drive i (vehicle ($\down$ $\up$x))
                           (from ($\down$ $\up$l1))
                           (to ($\down$ $\up$l2))
                           (via ($\down$ $\up$v))
                           (leaving-at ($\down$ $\up$t1))
                           (arriving-by ($\down$ $\up$t2)) ) )
                (while (and (run-ok x)
                            (drivable v)
                            (physically-able p drive)
                            (know-how p $\up$drive)
                            (effectively-identifiable 
                               p ($\up$(($λ$ p x l1 l2 v t1 t2)
                                    (drive p (vehicle x)
                                             (from l1)
                                             (to l2)
                                             (via v)
                                             (leaving-at t1)
                                             (arriving-by t2) ) ))
                                 (c1 $\up$i) (c2 $\up$x) (c3 $\up$l1) (c4 $\up$l2)
                                 (c5 $\up$v) (c6 $\up$t1) (c7 $\up$t2) ) ))
                (start-time t1) 
                (stopping-only-for-strong-reasons) ) )
    (drive p (vehicle x)
             (from l1)
             (to l2)
             (via v)
             (leaving-at t1)
             (arriving-by t2) )
    (strength sufficient-if-unprevented) ) )
\ecode

\ni 6. A storage manager for ANALYST has been implemented and tested.  However,
it is not presently being used because the space problems it would help to
alleviate have not yet arisen in our prototype implementations.

\ni 7. Two prototype commonsense-reasoning modules for the ANALYST were implemented
and tested.  One was written by Richard Gabriel and is used to explore modes of
reasoning, methods of evidence gathering, and efficiency issues in searching for
information.  The second reasoner, written by Lewis Creary, is partially based
on the first, but utilizes a more complex representation language (NFLT) and
reasoning graph, and a redesigned reasoning algorithm.  Gabriel's reasoner
is used for rapid experimentation with ideas, while the final ANALYST module
will descend from Creary's reasoner.  Results of the most successful
experiments with the former will be incorporated into the latter.

A pervasive and fundamental characteristic of commonsense factual reasoning is
the weighing of all reasonably discoverable relevant considerations before
drawing a conclusion.  The main structural components of a simple commonsense
factual inference are a target proposition (the conclusion or its negation)
and a number of separate {\sl considerations} (i.e., sub-arguments) that
bear evidentially on this proposition.  The various considerations bearing on
a single target proposition may differ in {\sl directional tendency}
(confirmation or disconfirmation), in {\sl inherent relative strength},
and in {\sl force} (i.e., absolute strength {\sl in situ}).  Values on
the first two of these attributes for a given consideration depend only on the
nature of the consideration, whereas the {\sl force} of a consideration in
a particular application depends also on how well its premises are known.
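
The bookkeeping involved can be pictured with a toy composition rule.  In
the sketch below (illustrative Common Lisp; the numbers and the composition
rule are stand-ins, not those used by REASON or ATREAS), each consideration
carries a direction and a force, and a conclusion is drawn only when the
composed evidence clearly favors one side:

\bcode
(defstruct consideration
  direction   ; :for or :against the target proposition
  force)      ; absolute strength in situ, a non-negative number

(defun draw-conclusion (considerations margin)
  ;; compose the considerations bearing on one target proposition
  (let ((pro 0) (con 0))
    (dolist (c considerations)
      (ecase (consideration-direction c)
        (:for     (incf pro (consideration-force c)))
        (:against (incf con (consideration-force c)))))
    (cond ((> (- pro con) margin) :likely-true)
          ((> (- con pro) margin) :likely-false)
          (t :undecided))))

;; A statistical consideration that a particular bird can fly, followed by
;; the addition of a stronger consideration against (it is a penguin):
;; (draw-conclusion (list (make-consideration :direction :for :force 2)) 1)
;;   =>  :LIKELY-TRUE
;; (draw-conclusion (list (make-consideration :direction :for :force 2)
;;                        (make-consideration :direction :against :force 5))
;;                  1)
;;   =>  :LIKELY-FALSE   ; adding a premise reversed the conclusion
\ecode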

Both of our commonsense reasoning programs attempt to find and evaluate
all reasonably discoverable considerations bearing on the truth of a given
statement.  Gabriel's REASON program currently handles 19 specific types
of consideration.  Among the types with maximal inherent relative strength
are those corresponding to rules of deductive logic, such as tautological
considerations (supporting statements true because of their logical form),
Boolean considerations (for example, $\hbox{A}∧\hbox{B}$ is true if A and B are
true), modus ponens considerations (B is true if A and
$\hbox{A}⊃\hbox{B}$ are true), identity considerations
($\hbox{P}(\hbox{a})$ is true if $\hbox{b}=\hbox{a}$ and
$\hbox{P}(\hbox{b})$ are true), and quantificational considerations
($\hbox{P}(\hbox{a})$ is true if $∀x.\hbox{P}(x)$ is true).  Rewrite rules
are included as reasoning experts to obtain the force of standard logical
deductions. For instance $¬¬\hbox{S}$ is equivalent to S and
$¬∀x.¬\hbox{P}(x)$ is equivalent to $∃x.\hbox{P}(x)$.  There are 9 such
reasoning experts.

Among the consideration-types with less-than-maximal inherent relative
strength is the {\sl statistical} consideration, depending on a premise
containing a statistical quantifier, such as `GREAT-MAJORITY.'  When the
truth of $\hbox{can-fly}(\hbox{tweety})$ is supported by the statements
$\hbox{bird}(\hbox{tweety})$ and
$((\hbox{GREAT-MAJORITY} \hbox{x} \hbox{bird})\hbox{can-fly}(\hbox{x}))$,
the support is less strong than it would be if `GREAT-MAJORITY' were replaced
by `ALL.'  Stated another way, if one knows with certainty that Tweety is a
bird and that the great majority of birds can fly, then, provided that this
knowledge is not preempted by other statistical knowledge one has about
Tweety, it gives rise to a statistical consideration in support of the
statement that Tweety can fly.  While this consideration is genuine, it
provides no certainty that Tweety can fly.  On the other hand, if one knows
with certainty that Tweety is a bird and that {\sl all} birds can fly,
then there exists a deductive consideration conferring certainty on the
conclusion that Tweety can fly.

There are other non-deductive consideration-types handled by Gabriel's
program, but they are presently subject to debate and possible revision.

\eat

Other non-deductive consideration-types (in addition to the statistical)
handled by Gabriel's reasoner are in a class of interesting
consideration-types which results from the observation that $P(a)$ may not
be the case, but perhaps $∃x.x=a∧P(x)$. This mode of reasoning can
continue with $∃y.∃x.y=x∧a=x∧P(y)$. Since this line of reasoning leads the
program to try to enumerate all of the individuals that it knows about,
cutting off the search at some point and reasoning about the extent of the
list collected so far must be performed if any conclusion is to be made.
For example, if a particular penguin is known, and if it is known that
this particular penguin is the only possible candidate penguin that could
be identified as Tweety, then it is only necessary to look to that penguin
in the search to make a strong case that if Tweety is not this particular
penguin, that Tweety is not, then, a penguin, and that Tweety can fly.

This mode of reasoning is similar to one that Allan Collins
suggested. That mode allowed one to state that if some fact were true,
then it would be important enough to be known.

Two reasoning experts are currently included in this class, along with an
expert to control combinatorial explosion during the search process.

A further embellishment on this mode of reasoning concerns the names
that are given to individuals. For instance, Tweety is represented
to the reasoning program as a data structure---a memory node of sorts---which
includes a name as part. Thus it is explicitly stated that the name of
the individual represented by the node corresponding to Tweety is `Tweety.'
There is a reasoning expert that infers that most individuals with different
names are distinct.

\endeat

The consideration-gathering process consists of building what was originally
conceived as a tree of dependencies, where the daughters of a node represent
the considerations that bear on that node.  The original tree-building process
was such that it was possible for two distinct nodes in the tree to have the
same content (with different links).  However, it was found experimentally
that if care is not taken to avoid such a duplication of node-contents,
serious inefficiencies in searching can arise.  For this reason, the
consideration-dependency-network is now a graph that is usually not 
a tree, with each node having a unique content.  A node in this graph is a
content-triple comprising a target proposition, a reasoning expert, and a
reasoning context, together with various links and pointers.  The uniqueness
of node-contents has been preserved (but with a changed notion of content) in
Creary's reasoner.

Creary's ATREAS commonsense-reasoning module for the ANALYST utilizes the
new representation-language, NFLT, and a very general reasoning graph capable
of supporting both forward and backward factual reasoning.  The reasoning
graph is the main data structure used by the commonsense reasoner and
consists of complex propositional nodes connected by complex labelled links
representing considerations (i.e., logical sub-arguments both for and against
various target propositions).  Commonsense reasoning begins with a specific
goal---to evaluate the epistemic status of a particular (usually small) set of
target propositions.  This goal, together with a certain allotment of
information-processing effort that may be expended on it, arises within some
larger problem-solving context involving factual inquiry, planning,
deliberation, or the like.  A goal-directed, heuristically-guided search is
conducted for considerations both for and against the target propositions.
This search may give rise to new target propositions, for which considerations
are also sought, in their turn.  The consideration-search terminates when
either it runs out of target propositions or the allotted effort (currently
in arbitrary units) has been expended.  Finally all the considerations
discovered are evaluated and composed and a conclusion is drawn, if
warranted.
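
In outline, the consideration-search has the shape of an agenda loop with an
effort budget.  The skeleton below (illustrative Common Lisp; the names and
conventions are ours, not ATREAS code) shows that control structure:
reasoning experts are applied to a target proposition, any new targets they
propose are queued, and the search stops when the agenda is empty or the
effort allotment is spent.

\bcode
(defun consideration-search (targets experts effort)
  ;; TARGETS is a list of propositions; EXPERTS is a list of functions,
  ;; each taking a proposition and returning a list of considerations,
  ;; where a consideration of the form (:NEW-TARGET p) spawns a further
  ;; target.  One unit of EFFORT is charged per proposition examined.
  ;; Returns an alist from each examined proposition to the considerations
  ;; found for it.  (No duplicate-checking is done in this sketch.)
  (let ((agenda (copy-list targets))
        (graph  '()))
    (loop while (and agenda (plusp effort))
          do (let* ((prop  (pop agenda))
                    (found (loop for expert in experts
                                 append (funcall expert prop))))
               (decf effort)
               (push (cons prop found) graph)
               (dolist (c found)
                 (when (and (consp c) (eq (first c) :new-target))
                   (push (second c) agenda)))))
    graph))
\ecode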

The search for considerations is directed by specialized reasoning experts.
In a full bi-directional implementation there will exist both a forward and a
backward reasoning expert for each rule of evidence.  At present only a few
of the backward reasoning experts have been implemented.  Currently under active
development are more reasoning experts (both deductive and non-deductive) and
heuristic knowledge for guidance of the consideration-search.

There is presently on-line at SAIL a small demonstration version of the ATREAS
system, which can be run in order to get a feel for the general character of
the reasoning program, the major data structures that it uses, its supporting
interactive environment, and the NFLT representation language.  The premises
available for the demonstration include information about birds and penguins
in general, and about a particular penguin named `Slick.'  Given only that
Slick is a bird, that the great majority of birds can fly, and that no
penguins can fly, the program concludes it is very likely that Slick can fly.
However, if we add to these same premises the further information that Slick
is a penguin, the program will then conclude it is most {\sl un}likely
that Slick can fly.  This demonstration illustrates what is now widely
referred to as the `non-monotonicity' of commonsense reasoning: the fact
that the mere addition of new premises can invalidate a previously reached
conclusion---something that cannot happen in purely deductive reasoning.

The theoretical model of commonsense reasoning on which this reasoning program
is based is discussed in a forthcoming technical report \ref:Creary.1983..

\ni 8. A substantial interactive research environment for Creary's
commonsense reasoner was implemented, tested, and documented on-line.  It
consists of an interactive program control and four co-routined
sub-programs for exercise of the reasoner and examination of data
structures.  These sub-programs are as follows:

\numitem{a.}EXERCISE-COMMONSENSE-REASONER  (permits convenient interactive
exercise of the commonsense reasoning program).

\numitem{b.}EXPLORE-TASK-RECORD  (permits convenient interactive examination of
a previously executed agenda of reasoning tasks).

\numitem{c.}EXPLORE-REASONING-GRAPH  (permits convenient interactive examination
of reasoning graphs produced by the commonsense reasoner).  An important and
particularly useful feature of this sub-program is its ability to provide
concise, easily readable summaries of all the considerations found by ATREAS
during the reasoning process.

\numitem{d.}EXPLORE-DISCRIMINATION-NET  (permits convenient interactive
examination of a discrimination net that uniquely indexes logically compound
concepts and propositions).

These programs relieve much of the tedium that would otherwise be involved in
experimenting with the ATREAS reasoner and permit efficient high-level
debugging of inference rules and factual information.  The programs presently
have a combined repertoire of 58 high-level commands, each one requiring 2 or
3 keystrokes for the command mnemonic, and (typically) from 0 to 3 arguments.
These programs are largely self-documenting through the use of
help-commands that offer information at three different levels of detail.

\ssect Implementation Statistics.

The programs described in items 7 and 8 above presently comprise over 10,000
lines (excluding blanks) of sparsely commented (but fairly readable) MacLisp
code.  This breaks down roughly as follows:

$$\vcenter{\halign to size{\hfill#\tabskip 0pt plus 20pt
⊗#\hfill\tabskip 0pt\cr
2700 lines:⊗REASON commonsense reasoner  (Gabriel)\cr
200 lines:⊗Tautology checker (Gabriel)\cr
1400 lines:⊗ATREAS commonsense reasoner  (Creary)\cr
600 lines:⊗Context mechanism shared by both reasoners  (Gabriel)\cr
2400 lines:⊗Interactive research environment for ATREAS/NFLT  (Creary)\cr
2800 lines:⊗NFLT: programs for translation, display, logical analysis,\cr
{}⊗and uniquifying formula storage.  (Creary)\cr
400 lines:⊗Conceptual structures for ATREAS/NFLT  (Creary)\cr}}$$

These are the programs that we are currently using on a regular basis in our
experimentation and development work.

\ssect Objectives  (March 1983--December 1986).

\ni 1. Further detailed epistemological studies of the commonsense reasoning
involved in planning and plan recognition.

\ni 2. Detailed study of the heuristic knowledge employed in commonsense
reasoning, planning, and plan recognition.

\ni 3. Completion, testing, and further refinement of the NFLT matcher
mentioned in accomplishment 3.

\ni 4. Further development of and experimentation with the commonsense reasoning
component of ANALYST.

\ni 5. Completion, testing, and further development of the plan-producing
component of ANALYST.

\ni 6. Design and implementation of a non-trivial memory-indexing scheme for the
ANALYST (the current NFLT discrimination net is not intended for use in
information-retrieval).

\ni 7. Transfer of the ANALYST code to a Symbolics 3600 Lisp Machine (for increased
address space and speed).

\ni 8. Implementation of an ANALYST capability to recognize an agent's plans,
given his goals and behavior.

\ni 9. Implementation of an ANALYST capability to infer an agent's plans and
goals, given his behavior.

\ni 10. Implementation of a non-trivial advice-taking capability for the ANALYST.

\vfill\eject
\sect Formalisms for Reasoning about Programs.

Carolyn Talcott is studying formal systems and structures for representing
and proving properties of programs, including
computations, programming systems, and programming environments.

One objective is to account for and elucidate programming practices common
in the Artificial Intelligence community.  This includes studying the
kinds of programs and structures used both in building systems for
representing knowledge, reasoning, and planning, and in building systems
for building systems.

An important aspect of the AI style of programming is the use of programs
and other complex structures in a variety of ways.  Some examples are
listed below (a short Lisp sketch illustrating several of them follows the
list):

\ni 1. Using programs to compute (partial) functions.  This is the `obvious'
use of programs. The important point is that the data structures these
programs use often represent complex objects of the sort not usually
treated in work on program verification or specification.

\ni 2. Using programs as functional arguments.
For example, a mapping function takes a function and a structure
as arguments and produces a new structure by applying the given 
function to each component of the input structure.
Search routines are often parameterized by functions for
generating successor nodes or by functions for evaluating positions.

\ni 3. Using programs as functional values.  Specifying some of the
arguments of a function and then `partially' applying that function
produces a second function as a value.  The function
$λ(\hbox{x},\hbox{y})\hbox{x}+\hbox{y}$, when
supplied with the argument 7 for the parameter y, yields the function
$λ(\hbox{x})\hbox{x}+7$, which adds 7 to its
argument.  Combinators applied to functions return functions as values.
For example $\hbox{compose}(\hbox{f},\hbox{g})(\hbox{x}) =
\hbox{f}(\hbox{g}(\hbox{x}))$ and
$\hbox{subs}(\hbox{f},\hbox{g})(\hbox{x}) =
(\hbox{f}(\hbox{x}))(\hbox{g}(\hbox{x}))$.

\ni 4. Using programs to represent procedural information.
Information in data bases, knowledge, and problem solving strategies
may be represented procedurally (as a program).

\ni 5. Using programs to represent infinite or partially completed objects
such as streams, sets, or continuations.  Suppose there are many solutions
to a given problem. One means of representing the solutions is by a program
which, when queried, produces a solution and another program representing
the remaining solutions.

\ni 6. Using programs as functions that specify control structure.
Co-routines (co-operating processes) can be expressed using functions as
can many forms of delayed evaluation.  For example $λ()\hbox{f}(\hbox{x})$
represents f applied to x, but will only be evaluated if explicitly called
for (by application to an empty list of arguments).

\ni 7. Using programs as `actors' and as behavioral descriptions of complex
data structures. That is, instead of implementing a data structure as
a block of storage along with a set of programs for manipulating that
storage, one can write a program which takes manipulation names
and values as arguments and which updates its internal environment
to achieve the effect of the desired data structure, perhaps returning a 
program representing the updated data structure as a value.

\ni 8. Using programs as objects that can be constructed, transformed,
interpreted, and compiled.  We include here definition of data structures,
compiling, and program synthesis.
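
Several of the uses just listed can be conveyed in a few lines of Lisp.  The
sketch below (illustrative Common Lisp, with names of our own choosing)
touches items 2, 3, 5, and 6:

\bcode
;; 3. Programs as functional values: combinators that return functions.
(defun compose (f g) (lambda (x) (funcall f (funcall g x))))
(defun subs (f g) (lambda (x) (funcall (funcall f x) (funcall g x))))

;; 2. Programs as functional arguments: a search step parameterized by a
;;    successor-generating function and a position-evaluation function.
(defun best-successor (node successors value)
  (let ((kids (funcall successors node)))
    (when kids
      (reduce (lambda (a b)
                (if (> (funcall value a) (funcall value b)) a b))
              kids))))

;; 5 and 6. Programs representing infinite objects via delayed evaluation:
;;    a stream is a pair of a value and a thunk producing the rest, and the
;;    thunk (lambda () ...) is evaluated only when explicitly called.
(defun integers-from (n)
  (cons n (lambda () (integers-from (1+ n)))))

;; (funcall (compose '1+ 'abs) -3)                           =>  4
;; (funcall (subs (lambda (x) (lambda (y) (+ x y))) '1+) 5)  =>  11
;; (car (funcall (cdr (integers-from 0))))                   =>  1
\ecode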

Other aspects of this style of programming include non-local control
mechanisms such as catch and throw, macros and metafunctions (for example,
to define new constructs or implement convenient abbreviations), `hooks'
into the underlying system (hooks provide dynamic access to the symbol
table and other data structures used by the interpreter and other system
routines), and use (often interchangeably) of both compiled and
interpreted code.

\ssect Goals and applications.

Our goal is to be able to treat the diverse uses of programs in an
integrated system.

We want to develop informal and formal methods to express and prove
properties that reflect naturally the intent of system designers and
programmers.  The informal part is based on a model of computation
(computation structure) that naturally accounts for the various styles of
programming encountered in AI systems.  There are two ways to mechanize
this model.  One is to encode the structure as data.  The second is to
define a formal theory for proving assertions about the structure.  The
connection between the two provides additional means of expressing
properties of the system.

Although the initial work will be abstract and technical, there
are many practical applications.  Some of these are summarized
below along with some important extensions.

\ni 1. Mechanization of the system via formalization of the metatheory and
model theory will provide an implementation of the system and a basis for
mechanical proof checking.  It also will provide tools for building
systems---by providing data types corresponding to some of the complex
structure used in these systems.

\ni 2. We will be able to prove properties of `real' Lisp programs, such as
programs that contain arbitrary assignments, non-local control such as
CATCH and THROW, as well as programs containing MACROs and META functions.
We will also treat programs that act on memory structures---such programs
in Lisp use operations that are destructive on list structure, such as
RPLACA and RPLACD.

\ni 3. Abstract data structures (domains and operations) will be formulated in
a natural manner within the proposed system.  Thus it will be used as a
framework for understanding data abstraction and, in particular, for
analyzing the notion of parameterized data type, which has been the subject
of recent investigation from several points of view.  See, for example,
\ref:Thatcher.1979. and \ref:Cartwright.1980..

\ni 4. We will formalize the notion of a computation system as computation
theory plus implementation.  Here we move from the static aspects of a
computation theory to the dynamic aspects of interactive systems.  An
interactive and dynamically changing system will be explained in terms of a
theory, a model, and an interpretation (dynamically) connecting the two.

\ni 5.  Formalization of the dynamic aspects of a programming system will
provide a framework for talking about declarations and definitions,
debugging, handling ERROR conditions, and distinguishing between different
kinds of undefinedness---abnormal termination versus non-termination.  A
clean, simple description of a system makes provision for and use of debugging
facilities easier, more straightforward, and more effective.

\ni 6. We will be able to develop a model and implementation for logic
programs.  Easily identified, mathematically smooth subsets of the
computation theory can be interpreted as a model for logic programs.  Our
constructions are uniformly parameterized by the data domains on which
computation is being defined.  This makes the connection with logic
programs ($\hbox{logic} (=\hbox{data})+\hbox{control} (=\hbox{PFN})$)
natural. A `PFN' is a computational equivalent to a partial function.

\ni 7. Ideas for expressing properties of high-level program descriptions
are useful for compiling and automatic program optimizations.

\ssect Understanding programming systems.

The sort of programming system we wish to study is a full interactive
Lisp-like system including language, interpreter, compiler, and debugger.
Our approach to understanding such a system---for instance, to be able to
make precise statements about its behavior, semantics, and extensions---is
to begin with a model of computation (computation structure) that reflects
the computational aspects of the system.  We then consider encodings of
this model in suitable data domains.

Within our model the semantics of a programming language can be expressed
in a number of ways, for example as the function computed by an
applicative program or by using structures representing the computation
defined by a program.  The connections between a program, the function it
computes, and the computation it defines can be made precise.  The
different views of programs (a) as functions---which may take functions as
arguments and return functions as values---and (b) as data can be
mechanized.  This is crucial for applications of the theory to other areas
of reasoning such as building models or plans.

\ssect Construction of computation structures.

A computation structure contains data, PFN's, a means of evaluating
expressions, a means of applying PFN's to arguments, and notions of
subcomputation.  We begin with a data structure---a collection of data and
operations taken to be primitive---and construct from it a computation
structure.  Computation is defined using conditional expressions,
abstraction, application, environments, and closures.  There is a natural
domain of values extending the basic data domain and including all PFN's
as values.  Applicative, imperative, iterative, recursive, and many other
styles of programming are naturally expressible as PFN's.  Computation
structures are uniformly parameterized by data structures in the sense
that the constructions define an effective mapping from data structures to
computation structures.

Two notions of computation are distinguished, which are called
`computation over a data structure' and `computation over a memory
structure.'  Computations over data structures use only the primitive
operations that construct and examine data; for example, in pure Lisp
they are CAR, CDR, CONS, EQUAL, and ATOM.  Computations over memory structures
use, in addition, destructive primitive operations, such as RPLAC's and
array assignments.  The same basic language serves to define either form
of computation.
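
The distinction can be illustrated with two familiar list-joining functions,
written here in ordinary Lisp rather than in our formal notation (the names
APPEND2 and NCONC2 are ours): the first computes over the data structure
alone, while the second computes over the memory structure by destructively
modifying its first argument.
\bcode
; Computation over a data structure: only CONS, CAR, CDR, and ATOM are used.
(DEFUN APPEND2 (X Y)
  (COND ((ATOM X) Y)
        (T (CONS (CAR X) (APPEND2 (CDR X) Y)))))

; Computation over a memory structure: the last cell of X is altered
; with RPLACD, so existing list structure is destructively modified.
(DEFUN NCONC2 (X Y)
  (COND ((ATOM X) Y)
        (T (RPLACD (LAST X) Y)
           X)))
\ecode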

Our notion of computation theory draws on ideas of an equation calculus
\ref:Kleene.1952. \ref:McCarthy.1963., hereditarily consistent functionals 
\ref:Platek.1966., abstract recursion theory \ref:Moschovakis.1969.
\ref:Fenstad.1980., and
primitive recursive functionals \ref:G\"odel.1958.. The resulting programming
system is related to SCHEME \ref:Steele.1975..

\ssect Multiple arguments and values.


In a computation structure PFN's are applied to a single argument, which is
a `cart' of objects.  The result of applying a PFN, when defined, is a
cart of values.  Carts are like lists, but in a computation the list itself
is never constructed; they are, in effect, virtual lists.  They provide a natural means of
expressing computations with multiple arguments and multiple values.  The
use of carts suggests a natural extension to the usual methods for
algebraic specification of data types.  It also suggests an interesting
class of (rational) operations on data domains.
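
Carts are internal to our computation structures, but, as a rough analogy
only, Common Lisp's multiple values convey the flavor: a function can accept
and return several objects without a list of them ever being constructed.
(The function name DIVIDE2 below is ours, chosen for illustration.)
\bcode
; Rough Common Lisp analogy to a PFN whose result is a cart of two values.
(DEFUN DIVIDE2 (N D)
  (VALUES (FLOOR N D) (MOD N D)))

; The two values are consumed directly; no list of results is built.
(MULTIPLE-VALUE-BIND (Q R) (DIVIDE2 17 5)
  (LIST Q R))              ; => (3 2)
\ecode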

\ssect Subsystems.

A useful technique for reasoning about programs is to identify natural
subclasses of PFN's and to formulate natural embeddings
among the classes where appropriate.  Specification of properties and
reasoning can then be carried out in the corresponding limited domain.

In our basic model many useful classes arise naturally from purely
`syntactic' considerations.  These include purely applicative,
simple imperative (loops with assignment limited to local
context), mixed imperative and applicative, and iterative.

There are also notions of `shape' and of `well-shaped' PFN's and
computations that seem to be useful for identifying interesting classes of
PFN's.  Shape is an intensional---essentially syntactic---analog to the
logical notion of type. Logical types are sets constructed from base sets
using constructions like cartesian product and exponential (function-space
formation).  See \ref:Scott.1975., \ref:Feferman.1977., or \ref:Feferman.1981..
Shape is related to notions of type or functionality used in combinatory
logic \ref:Hindley.1969. and \ref:Coppo.1980..
Shape is also related to notions of type used by programming and proof
systems that allow parameterized types \ref:Milner.1978..

Certain properties and principles hold for a PFN by virtue of 
its being well-shaped, and this can be used to simplify and guide proofs.
It can also be useful in the design of programs and for expressing
properties that may be of use to a compiler.

Another way of identifying interesting classes is to embed one computation
structure in another.  For example, to take advantage of methods developed
in temporal logic one would embed a temporal model of computation and
apply the temporal methods to the resulting subsystem.

\ssect Computations and programs as data.

The mechanization (formal encoding) of a computation structure turns PFN's
and computations into data allowing us to speak of operations on PFN's and
on computations.  In addition to being able to represent operations, we
can discuss properties of these operations.  Some properties of interest
are:

\numitem{i.}Soundness---preserving the function computed;

\numitem{ii.}Effect of an operation on the properties of the computation defined by a 
         program.  Properties of interest include
         recursion depth, memory usage, and other measures of execution cost;

\numitem{iii.}Program size.


The traditional kinds of operations on programs include transformations,
translations and program construction.  

A further kind of operation on programs is the construction of derived
programs; this is based on an idea of McCarthy.  Derived programs compute
properties of the original program.  Complexity measures can be expressed
via derived programs.  For example, depth of recursion, number of CONSes
executed, or number of subroutine calls are typical complexity measures
for Lisp.  Properties of these derived programs are similar to
properties treated in the traditional work on analysis of algorithms.
Many other properties of a program can be computed by derived programs.
Some examples are structures representing traces or complete computation
trees and other structures useful for debugging.
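
As a minimal illustration (constructed by hand here, not produced by the
general mechanism, and with the names APP and APP-CONSES chosen for the
example), consider a pure-Lisp append together with a derived program that
computes the number of CONSes the original would execute on the same
arguments.
\bcode
; Original program.
(DEFUN APP (X Y)
  (COND ((ATOM X) Y)
        (T (CONS (CAR X) (APP (CDR X) Y)))))

; A derived program: APP-CONSES computes how many CONSes (APP X Y) executes.
(DEFUN APP-CONSES (X Y)
  (COND ((ATOM X) 0)
        (T (+ 1 (APP-CONSES (CDR X) Y)))))
\ecode
Stating and proving the correctness of such derived programs is among the
objectives listed below.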

\ssect Notions of equivalence on computation structures.

When programs are used in the different contexts and with the variety
of interpretations discussed above, questions such as the correctness of
program transformations become complex.  In order to specify the intent
and correctness of programs and operations on programs, we must be able
to express the precise sense in which programs are to be considered equivalent.
The classical (extensional) notion is that PFN's are equivalent if 
they are defined on the same domain and have the same value on all arguments
in the domain of definition.  When arguments and values can be PFN's this 
is not an adequate definition.
Notions of equivalence are also important simply to express the fact that
different programs define the same computations or have the same
behavior.
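
For first-order PFN's over a fixed domain the classical notion just
mentioned can be written as follows (in ordinary notation, with
$f(x){\downarrow}$ meaning that $f$ is defined at $x$):
$$f \simeq g \quad\hbox{iff}\quad
\forall x\,[\,f(x){\downarrow} \leftrightarrow g(x){\downarrow}\,]\ \wedge\
\forall x\,[\,f(x){\downarrow} \supset f(x)=g(x)\,].$$
It is this definition that ceases to be adequate once arguments and values
may themselves be PFN's.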

\ssect Criteria for an adequate formal system for reasoning about PFN's.

In an adequate formal system definitions and assertions as well as
reasoning about computation structures should be naturally and efficiently
expressed.  This implies a number of criteria for such a system.

\ni 1. Data and PFN's should be part of the domain of discourse.  PFN's should
behave, in some respects, as functions.  They should also be able to be
arguments to and values of PFN's.  The collection of PFN's that can be
proved to exist should be closed under the usual constructions on
computable functions. These constructions include conditionalization,
abstraction, application, composition, closure, iteration (corresponding
to loop programs), and recursion.  Combinatory operations on PFN's should
be representable---as PFN's.  There should be principles for introducing
schemata for proving properties of special classes of PFN's.

\ni 2. There should be a means of introducing domains (sets) and operations
on domains including boolean operations, cartesian product, and formation of
finite sequences or trees.  Limited principles of inductive definition
together with the corresponding principles for proof by induction are also
needed.  Domains may contain data, PFN's, or both.

\ni 3. The system should be `parameterized' by the intended computation
domain (the primitive data types and operations).  This reflects the
principle of separation of control and data which is important for
modularity and stability of a computation system.

There have been many systems developed by logicians for formalizing
various kinds of mathematics including constructive analysis, constructive
algebra, and logic.  Many of the results can be easily modified to apply
to reasoning about programs.  Of particular interest is the work of
\ref:Feferman.1974., \ref:Feferman.1979., and \ref:Feferman.1981. on formal
systems for constructive and explicit mathematics (analysis).  Other
relevant work includes \ref:Beeson.1980., \ref:Martin-L\"of.1979.,
\ref:Nordstr\"om.1981., \ref:Scott.1977., and \ref:Tennant.1975..

\ssect Accomplishments.

\ni 1. The definition of the applicative fragment of the computation structure
(called RUM) has been completed.  This includes the associated notions of
evaluation of expressions, application of PFN's, and the subcomputation
relation.

\ni 2. A variety of simple examples have been worked out in RUM.  These include:

\numitem{i.}representation of numbers and number-theoretic functions.

\numitem{ii.}definition of several combinators---as examples of the
expressiveness of both PFN's and of multiple arguments and values.

\numitem{iii.}computing with streams/continuations---statement and proof of
correctness of a simple pattern matcher using continuations to do
backtracking.  The pattern matcher produces a stream of matches (an
illustrative sketch in this style is given after this list).

\numitem{iv.}representation of tree-like structures using PFN's.

\numitem{v.}representation of re-entrant (circular) memory structures using PFN's.
These are essentially the structures implicit in PROLOG when `occur check'
is omitted from unification.  Using PFN's, structures corresponding to
solutions of sets of equations can be represented, but it seems that
destructive operations are not representable as (extensional) operations
on PFN's.

Detailed proofs of properties of the structures and PFN's defined were given.
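
The following sketch, written in ordinary Lisp rather than in RUM notation
and with names of our own choosing, conveys the continuation-passing style
used in example iii above; it illustrates the style only, not the PFN
actually treated.  Patterns are lists of atoms in which the atom * matches
any run of elements; WIN is a success continuation receiving the unmatched
rest of the input together with a LOSE continuation that resumes the search
for further matches.
\bcode
(DEFUN MATCH (PAT INPUT WIN LOSE)
  (COND ((NULL PAT) (FUNCALL WIN INPUT LOSE))
        ((EQ (CAR PAT) '*)
         (MATCH-STAR (CDR PAT) INPUT WIN LOSE))
        ((AND (CONSP INPUT) (EQL (CAR PAT) (CAR INPUT)))
         (MATCH (CDR PAT) (CDR INPUT) WIN LOSE))
        (T (FUNCALL LOSE))))

; Let * absorb successively longer prefixes of the input.
(DEFUN MATCH-STAR (PAT INPUT WIN LOSE)
  (MATCH PAT INPUT WIN
         (FUNCTION (LAMBDA ()
           (IF (CONSP INPUT)
               (MATCH-STAR PAT (CDR INPUT) WIN LOSE)
               (FUNCALL LOSE))))))

; Collect the stream of matches, here simply as a list of unmatched tails.
(DEFUN ALL-MATCHES (PAT INPUT)
  (MATCH PAT INPUT
         (FUNCTION (LAMBDA (REST LOSE) (CONS REST (FUNCALL LOSE))))
         (FUNCTION (LAMBDA () NIL))))

; (ALL-MATCHES '(A * B) '(A X B Y B))  =>  ((Y B) NIL)
\ecode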

\ni 3. RUM was extended to include CATCH and THROW (non-local control)
constructs.  Example proofs of properties of PFN's using these constructs
were worked out.  A computation-preserving map from the extended RUM to the
original was defined---CATCH elimination.  

\ni 4. We defined a class of PFN's called iterative PFN's which extends the
usual `first order' notion of (flowchart) programs \ref:Manna.1978..  We
generalized the invariant assertion, intermittent assertion, and subgoal
assertion methods of verification to this class of PFN's.  We proved the
Manna formulae \ref:Manna.1969. for partial and total correctness in this 
setting and also the equivalence of his total correctness formula to the 
intermittent assertions formula.

\ni 5. A Computation Induction scheme for recursively defined PFN's was defined
and used in many of the examples.

\ni 6. A paper describing the above work is in preparation.

\ni 7. Some progress has been made towards a formal theory.  Much work remains
to be done.

\ssect Objectives  (March 1983--December 1986).

\ni 1. We will continue the project of identifying, formalizing, and
proving interesting and useful properties of typical programs.
In particular we will develop theories of finite sets and finite functions,
and apply these theories to proving properties of programs.

\ni 2. We will define notions of equivalence useful for proving properties
of PFN's and use these definitions in developing the theory of PFN's.

\ni 3.  We will develop mathematical properties of algebraic structures with
multiargument/multivalue operations (cart algebras) and identify some
useful notions of homomorphism.  The results will be applied to
representation of PFN's and to proving properties about PFN's and
programs.  They will also be applied to ideas for data structure definition
using cart algebras.

\ni 4. We will define encodings of computation structures as data 
and use these to explain macros, metafunctions, and dynamic aspects
of computation systems.

\ni 5. We will continue work on our ideas for using shapes to facilitate
reasoning about PFN's.  In particular:

\numitem{i.}we will derive rules for proving properties of well-shaped
PFN's;

\numitem{ii.}we will use shapes to identify a class of `hereditarily consistent'
PFN's \ref:Platek.1966. for which a generalization of McCarthy's
minimization scheme holds; and

\numitem{iii.}we will use rational shapes (the Y combinator is well-shaped) to treat
self-applicative PFN's, streams, and other interesting PFN's.

\ni 6.  We will further develop methods for proving properties of memory structures.
There is a simple extension to our basic model that allows treatment of
programs operating on such structures.  What is needed is to develop
methods that make it practical and useful to prove properties
of these programs.

\ni 7.  We will formalize both the general methods for producing derived
programs and the computation defined by these programs to provide means for
stating and proving the `correctness' of derived programs.

We will identify further properties to be computed by derived programs and
methods for deriving them.  We will investigate the method of specializing 
an interpreter as a general method for producing derived programs.

\ni 8.  We will continue work on defining a formal system for reasoning about
PFN's.  We will define a basic formal system and enrich the collection of
available formal notions via definitions and derived principles so that
the work of formalization can be carried out in a direct and natural
manner.

We will also identify principles for extending the formal system based on
the formal metatheory.

\vfill\eject
\sect EKL.

Jussi Ketonen and Joseph Weening have been working on further development of
EKL, an interactive proof checker in high-order predicate calculus.

The emphasis has been on creating a system and a language that would
allow the expression and verification of mathematical facts in a direct
and natural way. The approach taken has been quite successful: Ketonen has
been able to produce an elegant and eminently readable proof of Ramsey's
theorem in under 100 lines. At the same time EKL is versatile enough to
verify the associativity of the Lisp APPEND function in just one line.
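
The fact in question is the familiar identity (stated here in ordinary
mathematical notation rather than in EKL's syntax), where $u$, $v$, and $w$
range over lists:
$$\forall u\,v\,w\;[\,\hbox{APPEND}(\hbox{APPEND}(u,v),w) =
\hbox{APPEND}(u,\hbox{APPEND}(v,w))\,].$$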

The above results are important in that they indicate that programs
previously thought to be too complex for mechanical verification are now
within the reach of sufficiently powerful proof-checking systems.

\ssect Accomplishments.

\ni 1. A manual reflecting the state of EKL during December 1982 was written
by Weening and Ketonen.

\ni 2. EKL has been tested by its use in CS206, a course on Lisp programming
at Stanford. As homework assignments, students used it to prove facts
about Lisp programs.

\ni 3. One of the outgrowths of this development work has been a formalization
of the underlying logic of EKL.  EKL was based on a high-order predicate
logic since we felt that it is important to be able to discuss functionals
and lambda-abstraction in a natural way.  At the same time it seemed to us
that the currently fashionable formalizations of high-order logic were
still inadequate for expressing mathematical facts. Our approach was to
gradually modify the logic of EKL to accommodate intuitively
straightforward extensions without worrying too much about the formal
underpinnings.  It is satisfying to note that we have now come full
circle:  Ketonen has shown that the logic of EKL can be formally expressed
(along with its consistency proofs, etc.) in a very elegant and precise
way---in fact, in a more elegant way than the logics we started out with.
This theory also embodies the way EKL can talk about meta-theoretic
entities---to our knowledge it is one of the first instances of a logical theory
of mathematics that can discuss denotations in a coherent way.  Ketonen
and Weening are in the process of writing a paper on this.

A large amount of effort has been spent on the EKL rewriting system---currently
the most powerful and versatile component of EKL.  Ketonen has come up with a new
architecture for rewriting systems that is quite different from the ones
currently used, say, by Boyer and Moore.  Accompanying the rewriter there
is also a notion of a language for rewriting---how to control the process
through simple instructions to EKL.  This has turned out to be a fairly
powerful tool in reducing the lengths of proofs in EKL.

\ni 4. A decision procedure for an important fragment of the `obvious' theorems
of first order logic was designed and implemented.

\ssect Objectives.

During the next two years Ketonen plans to:

\ni 1. apply the techniques for proof checking that he has
developed to verifying the correctness of the various parsing algorithms
that are currently in fashion.

\ni 2. develop more powerful proof manipulation techniques in order
to study transformations of programs and proofs.  This methodology will then be
used to extract computational information out of correctness proofs of
programs and other mathematical facts.

\ni 3. continue his research on logical algorithms.  The paper of Ketonen and
Weyhrauch on a natural, decidable fragment of predicate calculus has been
submitted for publication.  Many interesting problems remain: for example,
the proper use of equalities in the decision procedure mentioned above and
its correct implementation. In implementing a new version of this decision
procedure Ketonen intends to use and refine the notion of postponement
developed by McCarthy.

\ni 4. extend the EKL rewriter further; an important future topic is the use
and elimination of definitions. Another direction for Ketonen's research
is the use of high-order unification in rewriting: Ketonen has shown that
the high-order unification algorithm of Huet terminates when the unifiable
variables come from only one side, a situation that occurs naturally in
rewriting.  As is well known, the termination problem for high-order
unification is unsolvable in general.

\vfill\eject
\sect Lisp Performance Evaluation.

Richard Gabriel is studying the issue of performance evaluation of Lisp
systems, both in general by studying the ingredients of performance
of a Lisp implementation and in particular by benchmarking a number
of important existing Lisp implementations.
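
For illustration, the following recursion-intensive function is typical of
the small benchmarks used in Lisp performance studies of this kind (the
timing harness is omitted).
\bcode
; The well-known TAK function: a small benchmark that exercises function
; call and simple fixnum arithmetic.
(DEFUN TAK (X Y Z)
  (COND ((NOT (< Y X)) Z)
        (T (TAK (TAK (1- X) Y Z)
                (TAK (1- Y) Z X)
                (TAK (1- Z) X Y)))))

; A typical timed call:
; (TAK 18 12 6)
\ecode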

\ssect Accomplishments.

\ni 1. 21 Lisp benchmarks were designed. At the Lisp and Functional
Programming Conference, the attendees interested in performance evaluation
stated that they would prefer to see Richard Gabriel do all of the
benchmarking personally.  Therefore, he is currently benchmarking those
implementations most often inquired about. These are the Xerox D series
(Dolphin, Dorado, and Dandelion), the Symbolics LM-2 and 3600, Vax Lisps
(Franz, DEC/CMU Common Lisp, InterLisp-Vax, NIL, and PSL), MC68000-based
Lisps (Apollo running PSL, SUN running PSL and Franz, and HP-9836 running
PSL), and DEC-10/20 Lisps (MacLisp, InterLisp-10, and Elisp).

\ni 2. One benchmark has been run on all of the various Lisp configurations, and
there are 75 measurements. These results have been made available to several
ARPA contractors to aid hardware acquisition decisions.

\ni 3. A paper, ``Performance of Lisp Systems,'' has been written (with Larry
Masinter of Xerox PARC) and published in the proceedings of the ACM
Symposium on Lisp and Functional Programming; this paper discusses the
methodological issues for performance evaluation and presents some of the
results mentioned in 1.  This paper will be part of the final report
delivered in December 1983. 

\ni 4. Some qualitative statements on the Lisp systems in the study have been
gathered for the final report. These are facts about the Lisp systems that are
of interest in comparing implementations and which are not performance-centered.

\ni 5. A detailed analysis of the DEC/CMU Vax Common Lisp has been completed by
Richard Gabriel and Martin Griss (from the University of Utah). This analysis
is being prepared for publication.

\ssect New Activities.

The S-1 MarkIIA computer, being developed at Lawrence Livermore National
Laboratory, is the first super-computer-class computer that is of possible
interest to the artificial intelligence community. A Common Lisp is being
written for that computer. We would like to include that machine in
our list of evaluated computers. This requires an extension of time for
this portion of the evaluation project since that machine is not yet
completed in a uni-processor configuration, and a multi-processor
configuration is about one year away.

\ssect Objectives.

\ni 1. Complete the current suite of benchmarks on each of the implementations
mentioned.

\ni 2. Obtain the set of results and collate them into the final report, with
interpretations of their significance, to be submitted in December 1983.

\ni 3. Several other investigators have mentioned that they have noticed that
the sum total of the results of a number of small benchmarks can conflict with
the results of a larger benchmark. It is proposed that a follow-on project be
undertaken to investigate the predictive power of small benchmarks on the
performance of a large program on a given Lisp implementation.

\ni 4. While translating these benchmarks to InterLisp, it was noted by expert
InterLisp programmers that the programming style of the 21 benchmarks
described above is contrary to good InterLisp programming style, and that
the D-Series computers and their InterLisps have been tuned to this
other style. This raises the question of how programming style and the performance
of a Lisp system interact. For instance, is it the case that a performance
profile for a Lisp system implies a style profile? Or is it the case that a
style profile causes the implementors to tune a performance profile?
We propose to study these issues.

\ni 5. After the above measurements have been made, new Lisp
implementations will continue to be developed. A low-resource benchmarking
effort with periodic updates to the benchmark data will be maintained
through 1986.

\ni 6. The S-1 MarkIIA uni- and multi-processor will be benchmarked, both
with respect to Lisp performance and with respect to overall performance
as compared with the DEC 2060 and other currently used processors. We will
draw on this set of Lisp benchmarks and any standard benchmarks for other
languages that may exist. This will be completed and a report submitted to
DARPA by December 1984.

\vfill\eject
\sect Automatic Construction of Special-purpose Programs.

Chris Goad has been working on the automatic construction of
special-purpose programs.  In software development the aims of efficiency
and generality are often in conflict.  It is common that specialized
programs, which take into account complex details of the situation in
which they are used, can be made much faster than cleaner, more general
programs. The conflict can be eased through the use of a kind of automatic
programming in which fast, specialized programs for a collection of related
tasks are constructed automatically. This style of automatic
programming---which may be referred to as special-purpose automatic
programming---differs in several respects from the better known variety in
which programs are automatically derived from logical specifications.
First, the primary aim of special-purpose automatic programming is
efficiency rather than replacement of human programming effort, although
there is of course substantial overlap between these aims. Second, special-purpose
automatic programming is far more tractable at the current stage
of knowledge than the other variety, since the class of programs to be
generated by any one synthesizer of special-purpose programs is
comparatively small.  We propose to continue our current efforts in
special-purpose automatic programming, with emphasis on applications to
computer graphics and computer vision.  The problems in graphics and
vision that will be attacked take the form of automating the generation
of programs that are specialized to the task of rapidly displaying---or
recognizing---particular three-dimensional objects.

In both computer graphics and computer vision a geometric computation is
undertaken which involves three pieces of information: (1) a physical
configuration C of objects in space, (2) a position and orientation P of
the viewer relative to C, and (3) a digital image I.  In computer
graphics, I is computed from C and P; in computer vision, C and P are
computed from I.  In many practical applications of both computer vision
and computer graphics, the configuration C is fixed in advance for any
particular series of computations; it is only P and I that vary within the
batch of computations.  On the computer graphics side, an example is
provided by flight simulation, where a landscape is fixed in advance of
any particular simulated flight. The appearance of this fixed landscape
must be computed from a viewer position and orientation which changes
every thirtieth of a second.  On the computer vision side, examples are
provided by the many industrial applications of automated vision where the
vision task takes the form of recognizing and locating a particular
three-dimensional object in a digitized image.  The exact shape of the
object to be perceived is known in advance; the purpose of the act of
perception is only to determine its position and orientation in space
relative to the viewer.  Again, C is fixed in advance, while P and I vary.

Problems of the kind just described can be attacked by use of special-purpose
automatic programming.  What one does is to generate a special-purpose
program $P↓C$ for each configuration $C$ to be dealt with; this
special-purpose program is then used for each computation in which the
configuration $C$ appears.  The advantage of this approach is speed.
Since the special-purpose program $P↓C$ has a very limited task to
perform---it is adapted to handling just one configuration---it can be
much faster than any general-purpose program would be for the same
configuration.
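
The flavor of the specialization step, though none of the geometric
machinery, can be conveyed by a toy example of our own: a specializer that,
given a fixed exponent, unfolds a general exponentiation program into a
straight-line program in which the exponent no longer appears as a
run-time datum.
\bcode
; Toy specializer (illustration only): unfold X**N for a fixed N.
(DEFUN POWER-CODE (N)
  (COND ((ZEROP N) 1)
        (T (LIST '* 'X (POWER-CODE (1- N))))))

(DEFUN SPECIALIZE-POWER (N)
  (LIST 'LAMBDA '(X) (POWER-CODE N)))

; (SPECIALIZE-POWER 3)  =>  (LAMBDA (X) (* X (* X (* X 1))))
\ecode
Here the fixed exponent plays the role of the fixed configuration $C$; the
specialized program performs no tests on it at run time.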

Computer graphics and vision are good applications for special-purpose
automatic programming for several reasons.  First, there is much to be
gained since speed is of central importance in many practical
applications.  Second, the various applications within computer vision and
graphics have enough in common that fairly specific tools developed for
one application nonetheless carry over to others.  Examples will be given
later in this proposal.  Finally, automatic deduction plays a central role
in all varieties of automatic programming, including the special-purpose
variety.  Each application of automatic programming gives rise to a
different class of propositions whose truth values need to be decided
automatically.  The propositions which arise in graphics and vision
applications are for the most part simple geometric assertions.  A well
developed technology for deciding geometric propositions exists---notably
in the form of decision procedures such as the simplex algorithm for
restricted classes of geometric problems.  Applications of special-purpose
automatic programming to graphics and vision derive a substantial part of
their power from this existing technology for automatic geometric
deduction.

Goad started work on the application of
special-purpose automatic programming to computer graphics in early 1981;
work on applications to vision began in mid-1982.
As reported below, good results have been obtained for the hidden surface
elimination problem in computer graphics.  Work on vision is 
still underway, but the
preliminary results look very promising.

The same general scheme
has been employed for both applications, and there is also substantial overlap
in the specific machinery needed to implement the scheme.
In particular a central process in both applications is the automatic
design of an appropriate case analysis for the solution of the problem.
This case analysis design involves dividing up the class of possible
geometric configurations into subclasses within which regular behavior can
be obtained.  In vision the restriction to regular behavior amounts to a
requirement that the visibility of features be determined and also that
the relationships among them depend in a smooth way on the position and
orientation parameters.  In graphics regular behavior amounts to
the requirement that a single back-to-front ordering of
the surface elements to be displayed be valid within the case at
hand.  In both graphics and vision, exhaustive forms of the case analysis
are obtained from general purpose algorithms, and their final, more
efficient form is obtained by a specialization process which depends
heavily on geometric automatic deduction.  The role of automatic deduction
is essentially that of determining which sets of relevant geometric
conditions (relevant to regularity in the appropriate sense) can co-occur,
and which cannot.

Based on our experience so far, we think it very likely that a wide class
of related applications in graphics and vision can be attacked with a
comparatively small collection of tools for the automatic synthesis of
special-purpose programs.  The tools will lie at two levels of generality.
First, there are absolutely general tools for program manipulation (such as
unfolding and pruning) which are completely independent of subject matter,
and, second, there are tools which apply to a variety of algorithmic
problems within a particular mathematical domain.  The mathematical domain
which underlies the class of problems which we have chosen is
three-dimensional geometry, and the domain-specific (but not problem-specific)
tools that we use include tools for the systematic construction of
geometric case analyses and for geometric deduction.  Three-dimensional
geometry is a good choice for this kind of work since, as a result of a
vast amount of previous effort, there are many existing tools and methods
to be exploited.

\ssect Accomplishments.


\ni 1. A program for synthesis of special-purpose hidden surface 
elimination programs has
been implemented and tested on a large-scale example provided by the
Link-Singer Corporation. On this example, the generated program was 10 times
faster than the fastest and most widely used of the previously known algorithms.
This work is described in \ref:Goad.1982..

\ni 2. A synthesis method for special-purpose vision programs has been
designed and partly implemented.  The special-purpose vision programs
which are generated have the task of recognizing and locating particular
objects in an image.  The general scheme employed for object recognition
involves sequential matching of object features to image features.  By
exploiting the advance information available about the object being
sought, very large speed-ups (perhaps as much as 100-fold) can be
obtained.  The practical outcome is that the `high-level' aspects
(that is, the matching process) of model-based 3-dimensional vision can be made
fast enough for industrial applications---even when running on a
micro-processor.

\ssect Objectives.

We are proposing to continue our work in special-purpose automatic
programming for a collection of related applications in three-dimensional
computer graphics and computer vision.  The work proposed here has two
aims:  first, to produce faster programs than any currently available for
specific practically important problems in computer graphics and vision by
the use of special-purpose automatic programming, and, second, to further
develop general tools for special-purpose automatic programming.

The first priority is to complete the work on vision described above.
In addition we propose to extend our work in vision in the following directions.

The first might be called `model-based low-level vision.'  The low-level
functions that we are using in our current vision work are generic
edge-finding operators of a conventional kind; they are based on local
operators whose intent is to detect evidence of discontinuities in the
(discrete) intensity surface.  No use is made of the values of the
intensity function except at presumed discontinuities.  However, in model-based
vision application a great deal of detailed information is
available concerning the object to be recognized beyond a simple
specification of where its edges lie.  The reflectivity function of the
object surfaces can be determined, as can the surface geometry between
edges.  Further, object edges often have a microstructure which produces
something more complicated than a simple discontinuity in the image
surface (for example, it is not uncommon to have a very narrow specular
reflection running along an edge).  Finally, in industrial situations
prior information is typically available about lighting conditions.  All
of this specializing information can be exploited to produce more
effective special-purpose vision programs; vision programs which rely on
special rather than generic edge operators, and which make use of shading
information as well as edge information.

The second area involves making use of weaker---rather than
stronger---specializing information.  In the work we have done so far we
have assumed that the exact shape of the object to be displayed or recognized
is available in advance, so that programs adapted to that particular shape
can be constructed.  Often, however, less specific, but nonetheless useful
information is available in advance.  For example, in computer graphics,
many applications involve displaying a number of rigid objects, which move
relative to each other, so that there are more sources of variation at
run-time than the position of the viewer.  In computer vision, it is
useful to consider constructing special-purpose programs for classes of
objects such as the class of all airplanes (see \ref:Brooks.1981.), rather than
for specific objects.  Advance information may also take the form of
geometric constraints which apply to the situation at hand.  For example,
it might be known in advance that one object will rest on top of another,
without specific information being available about the position of either
object.  We would like to be able to exploit arbitrary advance knowledge
expressed by geometric constraints.  This aim is closely related to a
central aim in expert systems or knowledge engineering work, in which one
seeks to separate declarative information (which nonetheless has
algorithmic relevance) from direct algorithmic content.  Our strategy for
realizing this aim in the domains of graphics and vision is to develop
methods for compiling declarative information (constraints) into efficient
special-purpose programs.  With such methods available the user is able
to express his knowledge about individual situations declaratively; the
extraction of algorithmic content is automated.  This contrasts with the
usual knowledge engineering approach, which involves separating the
`knowledge' (expressed as rules) from the use of that knowledge by a
uniform inference procedure.  Acronym \ref:Brooks.1981. implements the latter
approach for the vision domain in that geometric rules and constraints are
handled by fully general-purpose mechanisms.  Partly as a consequence of
this architecture, Acronym is at present far too slow for most practical
vision applications.  In any case, we plan to investigate the ways in
which our tools apply in situations where specializing information takes
the form of geometric constraints.

The third area involves exploiting continuity of motion in tracking moving
objects (vision) and in displaying moving objects (graphics).  If images
are repeatedly acquired or generated at short time intervals for
configurations which move in a continuous manner, then it is normally not
necessary to start each computation from scratch.  Instead, relevant
information may be retained from one computation (frame) to the next and
updated to take the (minor) changes between frames into account.  When the
shape of the object being tracked or displayed is known in advance, the
natural formulation of the special-purpose programming task is this: one
wishes to automatically generate a special-purpose program for performing
the inter-frame update computation for the object in question.  The
selection of the information to be retained from one frame to another may
also be automated.  In attacking this problem, we can draw on the recent
work of Paige \ref:Paige.1982. on domain-independent methods for
automatically generating update computations (Paige refers to this process
as `formal differentiation').
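
A domain-independent toy example (ours, not Paige's formulation) shows the
shape of such an update computation: rather than recomputing a quantity from
scratch for each frame, a stored value is adjusted to reflect the change
between frames.
\bcode
; Maintain a sum over the elements of a frame incrementally.
(DEFVAR *SUM* 0)

(DEFUN INIT-SUM (FRAME)                ; full computation, done once
  (SETQ *SUM* (APPLY (FUNCTION +) FRAME)))

(DEFUN UPDATE-SUM (OLD NEW)            ; one element changed from OLD to NEW
  (SETQ *SUM* (+ *SUM* (- NEW OLD))))
\ecode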

\vfill\eject
\sect Epistemology and Formalization of Concurrent and Continuous Action.

Yoram Moses is currently working on the epistemology and formalization of
concurrent and continuous action. A key barrier to progress in artificial
intelligence is that present systems deal mainly with discrete events that
result in a definite new situation rather than with events occurring
concurrently and continuously. This work bears some relation to concurrent
programming, but there are some definite differences. The synchronization
problem, crucial in the concurrent programming case, seems less central
here, whereas the handling of partial information and causality are more
important for an artificial intelligence program.

Hans Berliner's thesis presents a chess position that is a forced win for
white, but this fact is not straightforwardly computable in a `reasonable'
amount of time.  Further, it appeared to McCarthy that the natural proof
of this fact involves an essentially concurrent argument.  A proof has
been worked out in detail which uses a bottom-up approach and does not use
concurrent arguments. Thus this is no longer a valid example problem.

The examples that are being considered now involve real-world activities,
such as the decision-making needed to cross a street without being hit
by a car and to pass by a revolving sprinkler without getting wet.
Both of these problems require observation and prediction of continuous
events as well as the concurrent activity of the agent and the surrounding
environment.

Eventually it is hoped that a system could be built that would be capable
of guiding a robot in a competitive two-sided sport such as tennis. Of
course that is a long term goal, and the first steps in that direction
will be pursued.

In epistemology, some special cases of McCarthy's circumscription are
being analyzed, specifically the propositional case, and generalizations
and insight into circumscription are expected to result.
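
For reference, the (predicate) circumscription of $P$ in a sentence $A(P)$
is given by the schema
$$A(\Phi)\ \wedge\ \forall x\,(\Phi(x)\supset P(x))\ \supset\
\forall x\,(P(x)\supset\Phi(x)),$$
where $\Phi$ ranges over predicate expressions; the propositional case
under study is the restriction of this schema to propositional formulas.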

A paper is being prepared dealing with the representation and encoding of
relations by graphs. This paper will be presented at the SIAM Symposium on
the Applications of Discrete Mathematics held in June 1983.

\vfill\eject
\sect Proving Facts About Lisp.

Joseph Weening is investigating the systematization of proving facts
about Lisp programs.  In contrast to earlier work, an attempt will be made
to consider the full range of Lisp programs in a dialect such as MacLisp
or Common Lisp.  The `systematic' aspect will be to represent both
programs and proofs in a framework that can be manipulated by a computer.
The main reason for doing this is to allow verification, by the computer,
of the proofs which are presented.

Earlier work in this direction (cf. McCarthy and Cartwright, Boyer and
Moore) used a subset of Lisp, generally known as `pure Lisp,' which
consists of systems of recursive functions defined by conditional
expressions and primitive functions for numeric and list manipulation.
The aspects of Lisp that are not treated in this way include (1)
assignment statements, which change the values of variables; (2)
destructive operations, which change data structures; (3) sequential
programs; and (4) non-local control structures such as CATCH and THROW.

Use of the computer to represent programs and proofs, and to aid in the
generation and verification of proofs, will initially be through the EKL
proof-checker.  It is likely that some extension or modification of EKL
will be found desirable in order to provide a workable system.

The system will be considered successful when it becomes fairly routine to
prove the correctness of non-trivial Lisp programs.  It is also envisioned
that the system will be used as a tool in a course on Lisp programming
and that students in the course will be able to prove the correctness of
programs that they write.

\vfill\eject
\sect The Formal Reasoning Group.

John McCarthy is Professor of Computer Science and has worked in the area
of formal reasoning applied to computer science and artificial
intelligence since 1957.  He is working on the logic of knowledge and
belief, the circumscription mode of reasoning, formalizing properties of
programs in first order logic, and a general theory of patterns.

Lewis Creary is a Research Associate in Computer Science, received his
doctorate from Princeton University in philosophy, and has been concerned with
epistemology, semantics, logic, mechanized inductive learning, and design
principles for a general intelligence.  He is working on epistemological
problems of AI, representation of knowledge, commonsense reasoning and
problem-solving, and development of the advice-taking ANALYST program.

Richard Gabriel is a Research Associate in Computer Science, has a Ph.D.
from Stanford University in computer science, and has been active in the fields of
computer vision, natural language understanding, knowledge representation,
programming environments, programming language development, natural
language generation, Lisp system performance, and organizations for very
large intelligent programs.  He is working on the advice-taking ANALYST,
Lisp performance evaluation, and Common Lisp.

Christopher Goad is a Research Associate in Computer Science, has a Ph.D.
from Stanford University in computer science, and is active in the fields of
computer vision, special-purpose automatic programming, and proof theory.

Jussi Ketonen is a Research Associate in Computer Science.
He received his Ph.D. from the University of Wisconsin in mathematical
logic. He is interested in the mechanizable aspects of formal reasoning.

Yoram Moses is a graduate student and Research Assistant in Computer Science.
He has a B.Sc. in mathematics from the Hebrew University in Jerusalem with
an emphasis on mathematical logic and computer science. He is currently working
towards a Ph.D. at Stanford.

Carolyn Talcott has a Ph.D. in chemistry from the University of California
at Berkeley and has done research in theoretical chemistry and theory of
finite groups.  She is currently interested in the formalization of
concepts and activities relating to computation and the eventual
application of the results to representation of reasoning and problem
solving activities.

Joseph Weening is a Ph.D. student and Research Assistant in Computer Science.
He received a bachelor's degree from Princeton University in mathematics and
is interested in the application of mechanized formal systems to the semantics
of programming languages, with the goal of producing machine-assisted proofs
of program properties.

\vfill\eject
% References
\baselineskip 12pt
\vskip .3truein

\ctrline{\:N References}

\makeref:Beeson.1980.{
Beeson, M., \article{Formalizing Constructive Mathematics: Why and How?} in
Richman, F. (ed.), \book{Constructive Mathematics, Proceedings New Mexico 1980},
Lecture Notes in Mathematics 873, Springer-Verlag (1980), pp. 146--190.}

\makeref:Brooks.1981.{
Brooks, R. A., \article{Symbolic reasoning among 3D models and 2D images}, AI
Journal, vol 16, 1981.}

\makeref:Cartwright.1980.{
Cartwright, R., \article{A Constructive Alternative to Axiomatic Data Type
Definitions}, Proceedings LISP Conference, Stanford (1980).}

\makeref:Coppo.1980.{
Coppo, M., Dezani-Ciancaglini, M., and Venneri, B.,
\article{Principal Type Schemes and Lambda Calculus Semantics}
in Seldin, J. P. and Hindley, J. R. (eds.),
\book{To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism},
Academic Press (1980),
pp. 535--560.}

\makeref:Creary.1982.{
Creary, L., \article{A Language of Thought for Reasoning About Actions}, unpublished working
paper on NFLT, June 1982.}

\makeref:Creary.1983.{
Creary, L., \article{On the Epistemology of Commonsense Factual Reasoning:  Toward an Improved
Theoretical Basis for Reasoning Programs}, forthcoming Stanford CS Report.}

\makeref:Feferman.1974.{
Feferman, S.,
\article{A Language and Axioms for Explicit Mathematics}
in Dold, A. and Eckman, B. (eds.) \book{Algebra and Logic},
Lecture Notes in Mathematics 450, Springer-Verlag (1974)
pp.87--139.}

\makeref:Feferman.1977.{
Feferman, S.,
\article{Theories of Finite Type Related to Mathematical Practice}
in Barwise, J. (ed.),
\book{Handbook of Mathematical Logic},
North-Holland (1977),
pp. 913--971.}

\makeref:Feferman.1979.{
Feferman, S.,
\article{Constructive Theories of Functions and Classes}
in Boffa, M., vanDalen, D., and McAloon, K.(eds.)
\book{Logic Colloquium 78}, University of Mons, Belgium (August 1978)
North-Holland (1979)
pp. 159--224.}

\makeref:Feferman.1981.{
Feferman, S.,
\article{Theories of Variable Finite Type},
(draft).}

\makeref:Feferman.1982.{
Feferman, S.,
\article{Inductively Presented Systems and the Formalization of Meta-Mathematics} 
in vanDalen, D., Lascar, D.,and Smiley, J. (eds.)
\book{Logic Colloquium 1980},
North Holland 1982
pp. 95--128.}


\makeref:Fenstad.1980.{
Fenstad, J. E.,
\article{General Recursion Theory: An Axiomatic Approach},
Springer-Verlag (1980).}

\makeref:Gabriel.1982.{
Gabriel, R. P., Masinter, L. M.,
\article{Performance of Lisp Systems},
Proceedings of the 1982 ACM Symposium on Lisp and Functional
Programming, August 1982.}

\makeref:Gabriel.1983.{
Gabriel, R. P., Griss, M. L.,
\article{Lisp on Vax: A Case Study},
in preparation.}

\makeref:Goad.1982.{
Goad, C. A., \article{Automatic construction of special purpose programs for
hidden surface elimination}, Computer Graphics, vol. 16 No. 3, July 1982,
pp. 167--178.}

\makeref:G\"odel.1958.{
G\"odel, K.,
\article{On an Extension of the Finite Standpoint Never Used Before 1958},
Dialectica 12 (1958).}

\makeref:Hindley.1969.{
Hindley, R.,
\article{The Principal Type-Scheme of an Object in Combinatory Logic},
Trans.Am.Math.Soc. 146 (1969),
pp. 29--60.}

\makeref:Ketonen.1982.{Ketonen, J., Weening, J.,
\article{EKL---An Interactive Proof Checker}, user's reference manual,
in circulation.}

\makeref:Ketonen.1982.{Ketonen, J., Weyhrauch, R.,
\article{A Decidable Fragment of Predicate Calculus}, submitted for
publication.}

\makeref:Kleene.1952.{
Kleene, S. C.,
\article{Introduction to Metamathematics},
Van Nostrand.}

\makeref:Manna.1969.{
Manna, Z.,
\article{The Correctness of Programs},
J Computer and System Sciences 3,
pp. 119--127.}

\makeref:Manna.1978.{
Manna, Z.,
\article{Six Lectures on The Logic of Computer Programming},
SIAM CBMS-NSF Regional Conference Series in Applied Mathematics No. 31 (1980).}

\makeref:Martin-L\"of.1979.{
Martin-L\"of, P.,
\article{Constructive Mathematics and Computer Programming},
International Congress of Logic Methodology and Philosophy of Science
(to appear).}

\makeref:McCarthy.1959.{
McCarthy, J., \article{Programs with Common Sense},
Proc. Int. Conf. on Mechanisation of Thought Processes, Teddington, England, 
National Physical Laboratory, (1959).}

\makeref:McCarthy.1963.{
McCarthy, J.,
\article{A Basis for a Mathematical Theory of Computation}
in Braffort, P. and Herschberg, D. (eds.), \book{Computer Programming and
Formal Systems},
North-Holland  (1963)
pp. 33--70.}

\makeref:McCarthy.1982.{
McCarthy, John, \article{Coloring Maps and the Kowalski Doctrine}.}

\makeref:Milner.1978.{
Milner, R.,
\article{A Theory of Type Polymorphism in Programming},
J Comp.Sys.Sci 17,
pp. 302--321.}

\makeref:Moschovakis.1969.{
Moschovakis, Y. N.,
\article{Abstract First Order Computability I, II},
Trans.Am.Math.Soc. 138 (1969),
pp. 427--464, 465--504.}

\makeref:Norstr\"om.1981.{
Norstr\"om, B.,
\article{Programming in Constructive Set Theory: Some Examples},
Proc. ACM Conference on Functonial Programming and Languages and 
Computer Architecture (October 1981),
pp. 141--153.}

\makeref:Platek.1966.{
Platek, Richard A.,
\article{Foundations of Recursion Theory},
PhD Thesis, Stanford University, 1966.}

\makeref:Scott.1975.{
Scott, D.,
\article{Data Types as Lattices},
SIAM J. Computing 5(1976)
pp. 522--587.}

\makeref:Scott.1977.{
Scott, D.,
\article{Identity and Existence in Intuitionistic Logic}
in Fourman, M. P., Mulvey, C. J., and Scott, D. (eds.),
\book{Applications of Sheaves, Proceedings Durham 1977},
Lecture Notes in Mathematics 753, Springer-Verlag (1979)
pp. 660--696.}

\makeref:Steele.1975.{
Steele, G. L., Sussman, G. J.,
\article{SCHEME: an Interpreter for Extended Lambda Calculus},
MIT AI Memo 349 (December 1975).}

\makeref:Tennant.1975.{
Tennant, N.,
\article{Natural Deduction for Restricted Quantification Identity and Descriptions},
5th International Congress of Logic Methodology and Philosophy of Science (1975)
pp. I--51.}

\makeref:Thatcher.1979.{ Thatcher, J. W., Wagner, E. G. and Wright, J. B.,
\article{Data Type Specification: Parameterization and the Power of
Specification Techniques}, IBM Research report RC-7757 (1979).}

\vfill\end